The document discusses various methods for constructing confidence intervals for multinomial proportions. It analyzes the propensity for aberrations (e.g., unrealistic limits such as negative values) in the interval estimates across different classical and Bayesian methods. Specifically, it derives the mathematical conditions under which each method may produce aberrant interval limits, such as zero-width intervals or bounds falling outside [0, 1], especially for small cell counts. The document also develops an R program to facilitate computational implementation of the various methods for applied analysis of multinomial data.
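For intuition, here is a minimal sketch (not the document's R program) of the naive Wald construction, which exhibits exactly these aberrations; the counts are illustrative.

```python
import numpy as np
from scipy.stats import norm

def wald_multinomial_ci(counts, alpha=0.05):
    """Naive Wald intervals p_i +/- z * sqrt(p_i(1-p_i)/n), one per category."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    p = counts / n
    z = norm.ppf(1 - alpha / 2)
    half = z * np.sqrt(p * (1 - p) / n)
    return np.column_stack((p - half, p + half))

# A small cell count produces an aberrant (negative) lower bound,
# and a zero count produces a degenerate zero-width interval.
print(wald_multinomial_ci([1, 0, 9]))
```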
This document provides a summary of Chapter 14 from Aris Spanos' book on frequentist hypothesis testing. It begins with an overview of some of the inherent difficulties in teaching statistical testing, including that it introduces many new concepts and can be confusing. The document then provides a brief historical overview of the development of hypothesis testing, summarizing the contributions of Francis Edgeworth, Karl Pearson, and William Gosset. Edgeworth introduced the concepts of a hypothesis of interest, a standardized distance test statistic, and a threshold for significance. Pearson broadened the scope of hypotheses to include distributional assumptions and introduced the chi-square test and p-values. Gosset's work provided the foundation for modern statistical inference.
Statistical Measures of Location: Mathematical Formulas versus Geometric Appr...BRNSS Publication Hub
This paper illustrates, with an example, the comparison of geometric and numerical approaches to measures of location. A geometric derivation of the most popular measure of location (the mean) is obtained from a histogram by determining its centroid. The other measures of location, the median and the mode, are derived geometrically from the ogive and the histogram, respectively. Finally, the paper establishes that the two approaches produce the same results.
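A quick numerical check of the centroid claim, a sketch on simulated data: the centroid of the histogram bars matches the arithmetic mean up to binning error.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.gamma(3.0, 2.0, size=10_000)

# Centroid of the histogram: each bar contributes its midpoint weighted
# by its area (count), which is the mean of the binned data.
counts, edges = np.histogram(data, bins=40)
mids = 0.5 * (edges[:-1] + edges[1:])
centroid = np.sum(mids * counts) / counts.sum()

print(f"histogram centroid: {centroid:.3f}")
print(f"arithmetic mean:    {data.mean():.3f}")  # agree up to binning error
```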
THEME – 2 On Normalizing Transformations of the Coefficient of Variation for a ...ICARDA
The document discusses symmetrizing and variance-stabilizing transformations for normalizing statistical distributions. It examines standard transformations such as the arcsine transformation for binomial proportions and the square-root transformation for Poisson variables. Whereas variance-stabilizing transformations target constancy of variance, symmetrizing transformations directly reduce skewness to improve normality. The document derives differential equations for obtaining symmetrizing transformations and applies them to examples including the correlation coefficient, the binomial proportion, the Poisson variable, and the chi-square distribution. It finds that in some cases the symmetrizing transformation achieves better normality than the variance-stabilizing one. The document also discusses an application of these transformations.
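As a small illustration of the variance-stabilizing side (the symmetrizing transformations come from the document's differential equations, which are not reproduced here), the square-root transform visibly reduces the skewness of Poisson draws:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
x = rng.poisson(lam=4.0, size=100_000)

# Square-root transform: variance-stabilizing for Poisson data,
# and it also reduces (but does not eliminate) skewness.
print("skewness raw: ", skew(x))
print("skewness sqrt:", skew(np.sqrt(x)))
```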
The document presents a new method for forecasting enrollments at the University of Alabama using fuzzy time series. The method first partitions the historical enrollment data universe of discourse into intervals based on frequency density. It then defines fuzzy sets for each interval and fuzzifies the enrollment data. Fuzzy logical relationships are established based on the fuzzified enrollments. The method is then used to forecast enrollments from 1972 to 1992 and results are compared to existing fuzzy time series forecasting methods using average forecasting error rate and mean square error. The proposed method aims to improve forecasting accuracy over existing approaches.
Hyers ulam rassias stability of exponential primitive mappingAlexander Decker
This academic article discusses the Hyers-Ulam-Rassias stability of exponential primitive mappings. It begins with introducing the concepts of exponential primitive mappings, metric groups, and Hyers-Ulam-Rassias stability. It then proves a theorem showing that if an exponential primitive mapping G satisfies an inequality involving the sum of norms, then there exists a unique additive mapping T such that the difference between G and T is bounded above by a function of the norm. The proof constructs T as a limit and shows it has the required properties. The article concludes by mentioning potential directions for further research.
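For context, the classical Rassias-type stability statement (the paper adapts this pattern to exponential primitive mappings) reads:

```latex
\|G(x+y)-G(x)-G(y)\| \le \varepsilon\left(\|x\|^{p}+\|y\|^{p}\right),\quad 0 \le p < 1
\;\Longrightarrow\;
\|G(x)-T(x)\| \le \frac{2\varepsilon}{2-2^{p}}\,\|x\|^{p},
\qquad T(x)=\lim_{n\to\infty} 2^{-n}\,G(2^{n}x),
```

with T additive and unique, mirroring the limit construction described in the summary above.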
This document summarizes a study that used the fuzzy TOPSIS method to select the optimal type of spillway for a dam in northern Greece called Pigi Dam. Five alternative spillway types were evaluated based on nine criteria. The criteria were expressed as triangular fuzzy numbers to account for uncertainty. Weights for the criteria were determined using the AHP method and also expressed linguistically as fuzzy numbers. The fuzzy TOPSIS method was then used to rank the alternatives based on their distances from the ideal and negative-ideal solutions. The alternative with the highest relative closeness to the ideal solution was determined to be the optimal spillway type.
The document discusses smoothing parameter selection for density estimation from length-biased data using asymmetric kernels. It reviews recent work on applying Bayesian criteria for this purpose. The key points are: 1) Asymmetric kernels are better for density estimation of non-negative data as they avoid positive mass in negative regions. 2) Length-biased data arises in situations where observations are weighted by their values. 3) Estimating the underlying density from length-biased data requires adjusting for the bias. 4) Bayesian methods provide an approach for selecting the smoothing parameter for asymmetric kernel density estimators applied to length-biased data.
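A minimal sketch combining points 1-4, assuming a Chen-style gamma kernel and a fixed smoothing parameter b (the document's Bayesian bandwidth selection is not reproduced); the data and names are illustrative.

```python
import numpy as np
from scipy.stats import gamma

def lb_gamma_kde(x_obs, grid, b=0.2):
    """Gamma-kernel density estimate corrected for length bias.

    Length-biased sampling sees x with probability proportional to x, so
    each observation is down-weighted by 1/x (weights normalized to 1).
    The gamma kernel places no mass below zero, suiting non-negative data.
    """
    w = 1.0 / np.asarray(x_obs)
    w /= w.sum()
    # Chen-style gamma kernel: shape t/b + 1, scale b, evaluated at the data.
    return np.array([np.sum(w * gamma.pdf(x_obs, a=t / b + 1.0, scale=b))
                     for t in grid])

rng = np.random.default_rng(1)
population = rng.gamma(shape=2.0, scale=1.0, size=5000)
# Crude length-biased draw: sample with probability proportional to value.
sample = rng.choice(population, size=2000, p=population / population.sum())
grid = np.linspace(0.05, 8.0, 100)
fhat = lb_gamma_kde(sample, grid)
print(fhat[:5].round(3))
```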
The document discusses I-functions and H-functions, which generalize Fox's H-function and contain certain Feynman integrals and partition functions as special cases. It establishes new double integral relations for products involving I-functions and H-functions. Specifically, it derives double integral relations for (A) a product of an I-function and H-function, and (B) the composition of an H-function with another H-function, under certain conditions on the parameters. These relations unify and extend previous known results for various special functions.
This document discusses orthogonalization methods for fast Bayesian model selection in linear regression. It explores techniques such as principal component analysis (PCA) and a novel decorrelation method (DECO) to orthogonalize the Gram matrix XᵀX. This orthogonalization aims to enable scalable Bayesian model selection and averaging when the number of predictors p is large, by making the computations more efficient. The document also considers situations where p is both small and large.
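A minimal sketch of the PCA-style orthogonalization (DECO itself is not reproduced): rotating the design onto its principal components makes the Gram matrix diagonal, which is what decouples the model-selection computations.

```python
import numpy as np

def orthogonalize_pca(X):
    """Rotate the centered design onto its principal components so the
    Gram matrix Z.T @ Z becomes diagonal, decoupling the coordinates."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # Xc = U S Vt
    Z = U * S                                          # PCA scores
    return Z, Vt

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))  # correlated design
Z, _ = orthogonalize_pca(X)
G = Z.T @ Z
print(np.allclose(G, np.diag(np.diag(G))))  # True: off-diagonals vanish
```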
The document provides an overview of time series econometrics concepts including:
1) Time series econometrics analyzes the dynamic structure and interrelationships over time in economic data. It examines stationary and non-stationary stochastic processes.
2) A time series is stationary if its mean, variance, and autocovariance remain constant over time. A random walk process is a type of non-stationary process where the variable fluctuates around a stochastic trend.
3) The document discusses key time series econometrics models and techniques, including unit root tests, vector autoregressive models, causality tests, cointegration, and error correction models; a minimal unit-root check is sketched below.
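The sketch below illustrates the unit-root test mentioned in point 3, using `adfuller` from `statsmodels` on simulated series; the random walk and AR(1) examples are illustrative, not from the document.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
e = rng.normal(size=500)
random_walk = np.cumsum(e)            # non-stationary: variance grows with t
ar1 = np.zeros(500)
for t in range(1, 500):
    ar1[t] = 0.5 * ar1[t - 1] + e[t]  # stationary AR(1)

for name, series in [("random walk", random_walk), ("AR(1)", ar1)]:
    stat, pvalue, *_ = adfuller(series)
    print(f"{name}: ADF stat={stat:.2f}, p={pvalue:.3f}")  # small p rejects a unit root
```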
An alternative approach to estimation of populationAlexander Decker
This document summarizes an alternative approach to estimating population mean in two-stage sampling. It proposes ratio, product, and regression estimators when the population mean of the auxiliary variable is unknown. The estimators are defined and their mean squared errors are derived. It is shown that under certain conditions, the suggested estimators are more efficient than existing estimators. Numerical examples are provided to support the theoretical results. In summary, this paper presents new estimators for population mean in two-stage sampling that make use of auxiliary information when the auxiliary population mean is unknown.
A note on bayesian one way repeated measurementsAlexander Decker
This document presents a Bayesian approach for analyzing one-way repeated measurements models. It discusses:
1) Deriving the likelihood function and prior distributions for the model parameters.
2) Calculating the posterior distributions of the model parameters based on the likelihood and prior distributions.
3) Computing the Bayes factor to compare the Bayesian mixed repeated measurements model to its fixed counterpart.
This document presents an algebraic algorithm for solving linear congruences and applies it to cryptography. It begins with background information on linear congruences and number theory. It then describes developing an algebraic algorithm that converts linear congruences into linear equations to solve them algebraically. This is a simpler approach than existing methods. Examples are provided to validate the algorithm. The paper also demonstrates applying the algorithm to solve linear congruences involved in the RSA public key cryptography system.
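A sketch of the standard extended-Euclid route to solving a linear congruence (not necessarily the paper's algebraic conversion to linear equations), with the classic toy RSA key-generation step as the application; the small primes are illustrative.

```python
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def solve_congruence(a, b, m):
    """All solutions of a*x = b (mod m), or [] if none exist."""
    g, x0, _ = egcd(a, m)
    if b % g:
        return []
    x = (x0 * (b // g)) % (m // g)
    return [(x + k * (m // g)) % m for k in range(g)]

# Toy RSA: the decryption exponent d solves e*d = 1 (mod phi(n)).
p, q, e = 61, 53, 17
phi = (p - 1) * (q - 1)
d = solve_congruence(e, 1, phi)[0]
print(d, (e * d) % phi)  # -> 2753 1
```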
A proposed nth – order jackknife ridge estimator for linear regression designsAlexander Decker
This document proposes and evaluates an nth-order Jackknife Ridge estimator for linear regression to address issues of collinearity. Several existing estimators are introduced, including Ordinary Least Squares, Ridge, Jackknife Ridge, and second-order Jackknife Ridge. The document then proposes a general nth-order Jackknife Ridge estimator and develops it mathematically. Simulation results using five sample matrices show that higher-order Jackknife Ridge estimators have lower bias than lower-order estimators, with the bias decreasing by a factor of 10⁻⁴ with each increasing order. Higher-order estimators also have smaller variances than Ordinary Least Squares. The nth-order estimator approach thus provides a way to jointly reduce bias and variance.
International Journal of Mathematics and Statistics Invention (IJMSI) is an international journal intended for professionals and researchers in all fields of mathematics and statistics. IJMSI publishes research articles and reviews within the whole field of Mathematics and Statistics, new teaching methods, assessment, validation, and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
This document is a curriculum vitae that provides biographical information about Hemant Kumar Nashine. It includes his education history, with degrees including a PhD in mathematics from 2007-2010. It also lists his employment history teaching mathematics at various universities in India from 2001-2015. Finally, it provides a bibliography of his 195 journal publications between 2004-2016, with several papers accepted or under review.
AN ADVANCED TOOL FOR MANAGING FUZZY COMPLEX TEMPORAL INFORMATIONcsandit
Many real-life applications need to handle and manage temporal information. Allen temporal relations are among the best-known and most widely used formalisms for modelling and handling temporal data. This paper discusses a novel idea for introducing flexibility into the definition of such relations between two fuzzy time intervals. The key concept of the approach is a suitably modelled fuzzy tolerance relation. Tolerant Allen temporal relations are then defined using the dilated and eroded versions of the initial fuzzy time intervals. By leveraging particular fuzzy indices to compare two fuzzy time intervals, this extension of Allen relations is integrated into the Fuzz-TIME system developed in our previous works.
The document describes q-Bernstein polynomials, which generalize Bernstein polynomials using q-integers. It proposes using q-Bernstein polynomials to estimate distribution and density functions nonparametrically from sample data. Specifically, it defines q-Bernstein distribution and density estimators, which reduce to standard Bernstein estimators when q=1. The paper studies the asymptotic properties of the estimators and performs numerical studies using cross-validation to select the smoothing parameter q. Figures show the estimators can provide a smooth approximation to distribution and density functions.
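A minimal sketch of the q = 1 special case, the classical Bernstein CDF estimator, assuming a hand-picked degree m (the paper selects the smoothing parameter q by cross-validation, which is not reproduced):

```python
import numpy as np
from scipy.stats import binom

def bernstein_cdf(sample, x, m=30):
    """Classical Bernstein estimator of a CDF on [0, 1] (the q = 1 case):
    a binomial-weighted smoothing of the empirical CDF at the knots k/m."""
    sample = np.asarray(sample)
    ecdf_at_knots = np.array([(sample <= k / m).mean() for k in range(m + 1)])
    weights = binom.pmf(np.arange(m + 1)[:, None], m, x)  # shape (m+1, len(x))
    return ecdf_at_knots @ weights

rng = np.random.default_rng(2)
data = rng.beta(2, 5, size=300)
grid = np.linspace(0, 1, 11)
print(np.round(bernstein_cdf(data, grid), 2))  # smooth, monotone CDF estimate
```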
12 13 igcse int'l math extended (y1,2) topic overviewRoss
The document provides an overview of the curriculum for International Mathematics Extended at IGCSE levels for Year 1 and Year 2. It lists the main units covered each year including the topics within each unit, recommended resources, and number of lessons. The units cover areas of algebra, number, geometry, probability, statistics, coordinate geometry, and trigonometry.
Causal set theory is an approach to quantum gravity that represents spacetime as a locally finite, partially ordered set of points with causal relations. It is a minimalist approach that does not assume an underlying spacetime continuum. There are two main methods to reconstruct a manifold from a causal set: 1) extracting manifold properties such as dimension from causal sets that can be embedded in a manifold, and 2) sprinkling points randomly into an existing manifold to produce an embedded causal set. To study dynamics, an action must be defined on causal sets that reproduces the Einstein-Hilbert action in the continuum limit; several proposals define nonlocal operators on causal sets that approach the d'Alembertian operator in this limit.
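A minimal sketch of the sprinkling construction in a finite region of 1+1-dimensional Minkowski space; the region, density, and units are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
# Sprinkle points uniformly (a Poisson sample at fixed n) into a unit box
# of 1+1 Minkowski space with coordinates (t, x).
t, x = rng.uniform(size=(2, n))

# Causal order: p precedes q iff q lies in p's future light cone,
# i.e. dt > |dx| with the speed of light set to 1.
dt = t[None, :] - t[:, None]
dx = np.abs(x[None, :] - x[:, None])
relation = dt > dx  # relation[i, j] == True means i precedes j

print("causal pairs:", int(relation.sum()), "of", n * (n - 1) // 2)
```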
Abstract: A quadripartitioned single-valued neutrosophic (QSVN) set is a powerful structure with four components: truth (T), falsity (F), unknown (U), and contradiction (C). It also generalizes the concepts of fuzzy, intuitionistic, and single-valued neutrosophic sets. In this paper we propose the concept of K-algebras on QSVN sets and their level subsets, and study some of the resulting properties. In addition, we investigate the characteristics of QSVN K-subalgebras under homomorphism.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help protect against mental illness and improve symptoms.
Dimensionality Reduction Techniques In Response Surface Designsinventionjournals
Dimensionality reduction has enormous applications in various fields and industries. It can be applied in a way that is optimal with respect to time and cost in the agricultural sciences, mechanical engineering, chemical technology, pharmaceutical sciences, clinical trials, biological studies, image processing, pattern recognition, etc. Several researchers have attempted to reduce the size of the model for specific problems using mathematical and statistical techniques that identify and eliminate insignificant variables. This paper presents a review of the available literature on dimensionality reduction.
Stability criterion of periodic oscillations in a (4)Alexander Decker
1) The authors establish that the distribution of the harmonic mean of group variances is a generalized beta distribution through simulation.
2) They show that the generalized beta distribution can be approximated by a chi-square distribution.
3) This means that the harmonic mean of group variances is approximately chi-square distributed, though the degrees of freedom need not be an integer. Using the harmonic mean in place of the pooled variance allows hypothesis testing when group variances are unequal; a small simulation in this spirit is sketched below.
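A small simulation in the spirit of points 1-3, moment-matching a chi-square to the harmonic mean of simulated group variances (the matched degrees of freedom come out non-integer):

```python
import numpy as np

rng = np.random.default_rng(3)
k, n = 5, 12        # groups, observations per group
reps = 20_000

# Sample variances of unit-variance normal groups are scaled chi-square;
# take their harmonic mean across groups in each replication.
s2 = rng.chisquare(df=n - 1, size=(reps, k)) / (n - 1)
hm = k / (1.0 / s2).sum(axis=1)

# Moment-match a scaled chi-square: if H ~ c * chi2(nu), then
# nu = 2 * mean(H)^2 / var(H); nu need not be an integer.
nu = 2 * hm.mean() ** 2 / hm.var()
print(f"matched (possibly non-integer) df: {nu:.2f}")
```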
CHN and Swap Heuristic to Solve the Maximum Independent Set ProblemIJECEIAES
We describe a new approach to solve the problem to find the maximum independent set in a given Graph, known also as Max-Stable set problem (MSSP). In this paper, we show how Max-Stable problem can be reformulated into a linear problem under quadratic constraints, and then we resolve the QP result by a hybrid approach based Continuous Hopfeild Neural Network (CHN) and Local Search. In a manner that the solution given by the CHN will be the starting point of the local search. The new approach showed a good performance than the original one which executes a suite of CHN runs, at each execution a new leaner constraint is added into the resolved model. To prove the efficiency of our approach, we present some computational experiments of solving random generated problem and typical MSSP instances of real life problem.
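A simplified stand-in for the hybrid scheme: a greedy seed (in place of the CHN solution) improved by a (1,2)-swap local search. The random graph and parameters are illustrative.

```python
import random

def greedy_independent_set(adj):
    """Seed solution: add vertices in random order when no neighbor is chosen.
    (Stands in for the CHN starting point used in the paper.)"""
    order = list(adj)
    random.shuffle(order)
    chosen = set()
    for v in order:
        if not adj[v] & chosen:
            chosen.add(v)
    return chosen

def swap_improve(adj, chosen):
    """(1,2)-swap local search: remove one vertex if two others can enter."""
    improved = True
    while improved:
        improved = False
        for v in list(chosen):
            rest = chosen - {v}
            # vertices outside the set whose only chosen neighbor was v
            candidates = [u for u in adj
                          if u not in chosen and not adj[u] & rest]
            for i, a in enumerate(candidates):
                for b in candidates[i + 1:]:
                    if b not in adj[a]:        # a, b mutually non-adjacent
                        chosen = rest | {a, b}
                        improved = True
                        break
                if improved:
                    break
            if improved:
                break
    return chosen

random.seed(0)
n, p = 30, 0.2
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:
            adj[u].add(v); adj[v].add(u)

s = swap_improve(adj, greedy_independent_set(adj))
print(len(s), "vertices; independent:", all(not adj[u] & (s - {u}) for u in s))
```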
This document appears to be presenting rug designs by designer Guy Laroche. It lists several rug names and patterns, including Lily Silver (Plum Silver), Tongo Wenge, Trace Wenge, and Gazebo Amethyst. The rugs are being presented by a company located in Ho Chi Minh City, Vietnam called Hassan's Carpets.
Developing for BlackBerry 10 – Tools and SDKs by Luca FilighedduCodemotion
The BlackBerry 10 platform, launched this past January 30, is currently the most open mobile platform on the market in terms of supported development technologies. Developers can easily port their code to BB10 with minimal effort and, thanks to the numerous frameworks and SDKs available, have a wide range of tools at hand to make the most of the new operating system's power. What strategy should you adopt to bring your app to BlackBerry 10? What are the opportunities?
Presentation given at the National Open Data Congress 2013 in Eindhoven. Central message: know your data! The presentation shows how to turn data into stories and knowledge, but also that you can really only do this if you know the data and its provenance.
Stora Enso has set itself the goal of working towards a sustainable future. This basic principle also extends to how we handle raw materials, energy, products, and solutions. We value sustainable development, which is also reflected in our environmental and energy policy. Our emissions to air and water were well within the limits permitted by the authorities. In 2011, the discharge of organic substances into the water was lower than ever before in the history of the Nymölla mill, a very encouraging development. We are continuing our program to increase energy efficiency. In this environmental statement we describe our environmental measures, the monitoring of our environmental objectives, and our new targets. Environmental protection is part of our everyday work, and we strive for continuous improvement in this area. We hope you find interesting information in this report. If you have any questions, please feel free to contact us at any time.
This document presents Spain's regional results in the PISA 2009 assessment. It shows that the regions of Castilla y León, Madrid, and La Rioja had the highest average scores, while the Canary Islands, Andalusia, and the Balearic Islands had the lowest. It also analyzes the variance between and within schools, as well as the overall educational development of each region based on performance, equity, and quality.
Presentation on traditional games of the huerta of Murciajaortegaestrada
This document describes several traditional games from the huerta of Murcia, Spain. It explains that traditional games are passed down from generation to generation and may be associated with a particular place. It then details the rules and materials needed for popular games such as bolos huertanos, caliche, marbles, the spinning top, los rompes, the ribbon race, the greasy pole, and others.
We support you with manuscript preparation, layout, and book production. Perhaps you will even decide on a novel book design.
This document outlines the services provided for both a deep/spring clean and a recurrent clean. For the deep clean, all areas of the home are thoroughly cleaned including walls, windows, fixtures, appliances, cabinets, ovens, and floors. For the recurrent clean, high-touch areas are spot cleaned as needed including walls, sinks, toilets, and floors, with light dusting of other surfaces. Additional services like laundry and organizing are also offered. It is noted that no cleaning will be done of areas that cannot be reached by their three-step step stool.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document provides motivation for using a circular kernel density estimator for nonparametric density estimation of circular data. It describes how a simple approximation theory from linear kernel estimation can be adapted to the circular case by replacing the kernel with a sequence of periodic densities on [-π,π] that converge to a degenerate distribution at θ=0. It shows that the wrapped Cauchy density satisfies the conditions to serve as such a kernel, resulting in the circular kernel density estimator proposed in equation 1.12. This estimator is shown to converge uniformly to the true density f(θ) as the sample size increases, providing theoretical justification for its use in smooth nonparametric density estimation for circular variables.
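A minimal sketch of the resulting estimator: an equal-weight mixture of wrapped Cauchy kernels centred at the observations, with the concentration ρ fixed by hand rather than chosen by a data-driven rule.

```python
import numpy as np

def wrapped_cauchy_pdf(theta, mu, rho):
    """Wrapped Cauchy density on the circle with mean direction mu and
    concentration rho in (0, 1); rho -> 1 degenerates at mu."""
    return (1 - rho**2) / (2 * np.pi * (1 + rho**2 - 2 * rho * np.cos(theta - mu)))

def circular_kde(theta_grid, data, rho=0.8):
    """Circular kernel estimate: average of wrapped Cauchy kernels
    centred at the observations."""
    return np.mean([wrapped_cauchy_pdf(theta_grid, t, rho) for t in data], axis=0)

rng = np.random.default_rng(4)
data = rng.vonmises(mu=0.5, kappa=3.0, size=400)  # circular sample
grid = np.linspace(-np.pi, np.pi, 201)
fhat = circular_kde(grid, data)
print(np.trapz(fhat, grid))  # integrates to ~1 over the circle
```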
Considerations on the genetic equilibrium lawIOSRJM
In the first part of the paper I will present a brief review of the Hardy-Weinberg equilibrium and its formulation in projective algebraic geometry. In the second and last part I will discuss examples and generalizations on the topic.
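For concreteness: with genotype frequencies (u, v, w) for (AA, Aa, aa), Hardy-Weinberg equilibrium is exactly the conic condition v² = 4uw, which is the projective-geometric formulation alluded to above. A small check, with illustrative numbers:

```python
def hwe_from_allele(p):
    """Genotype frequencies (AA, Aa, aa) implied by allele frequency p."""
    q = 1 - p
    return p * p, 2 * p * q, q * q

def on_hw_conic(u, v, w, tol=1e-12):
    """Hardy-Weinberg equilibrium <=> the point (u:v:w) lies on v^2 = 4uw."""
    return abs(v * v - 4 * u * w) < tol

u, v, w = hwe_from_allele(0.3)
print(on_hw_conic(u, v, w))        # True: equilibrium population
print(on_hw_conic(0.5, 0.2, 0.3))  # False: off the conic
```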
This document describes a new method called Likelihood-based Sufficient Dimension Reduction (LAD) for estimating the central subspace. LAD obtains the maximum likelihood estimator of the central subspace under the assumption of conditional normality of the predictors given the response. Analytically and in simulations, LAD is shown to often perform much better than existing methods like Sliced Inverse Regression, Sliced Average Variance Estimation, and Directional Regression. LAD also inherits useful properties from maximum likelihood theory, such as the ability to estimate the dimension of the central subspace and test conditional independence hypotheses.
This document provides an overview of probabilistic reasoning and uncertainty in knowledge representation. It discusses:
1) Using probability theory to represent uncertainty quantitatively rather than logical rules with certainty factors.
2) Key concepts in probability theory including random variables, probability distributions, joint probabilities, marginal probabilities, and conditional probabilities (see the sketch after this list).
3) Representing a problem domain as a probabilistic model with a sample space of possible variable states.
4) Independence of variables allowing simpler computation of probabilities.
5) The document is an introduction to probabilistic reasoning concepts to be covered in more detail later, including Bayesian networks.
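The sketch referenced in point 2, a joint probability table with marginalization, conditioning, and an independence check; the weather/umbrella numbers are made up.

```python
import numpy as np

# Joint distribution P(Weather, Umbrella) as a table:
# rows = weather in {sun, rain}, cols = umbrella in {no, yes}.
joint = np.array([[0.54, 0.06],
                  [0.08, 0.32]])
assert np.isclose(joint.sum(), 1.0)

p_weather = joint.sum(axis=1)    # marginal: sum out Umbrella
p_umbrella = joint.sum(axis=0)   # marginal: sum out Weather

# Conditional P(Weather | Umbrella = yes) = joint column / marginal.
p_weather_given_yes = joint[:, 1] / p_umbrella[1]
print(p_weather, p_weather_given_yes)

# Independence would mean joint == outer product of marginals; not here.
print(np.allclose(joint, np.outer(p_weather, p_umbrella)))
```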
Knowledge of cause-effect relationships is central to the field of climate science, supporting mechanistic understanding, observational sampling strategies, experimental design, model development and model prediction. While the major causal connections in our planet's climate system are already known, there is still potential for new discoveries in some areas. The purpose of this talk is to make this community familiar with a variety of available tools to discover potential cause-effect relationships from observed or simulation data. Some of these tools are already in use in climate science, others are just emerging in recent years. None of them are miracle solutions, but many can provide important pieces of information to climate scientists. An important way to use such methods is to generate cause-effect hypotheses that climate experts can then study further. In this talk we will (1) introduce key concepts important for causal analysis; (2) discuss some methods based on the concepts of Granger causality and Pearl causality; (3) point out some strengths and limitations of these approaches; and (4) illustrate such methods using a few real-world examples from climate science.
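A minimal sketch of a Granger-style analysis using `grangercausalitytests` from `statsmodels`; the synthetic pair of series (x driving y at lag 2) is illustrative, not climate data.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(5)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 2] + rng.normal()  # x drives y at lag 2

# Column-order convention: the test asks whether the SECOND column
# Granger-causes the first; small p-values at lag 2 support "x causes y".
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=3)
```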
A Non Parametric Estimation Based Underwater Target ClassifierCSCJournals
Underwater noise sources constitute a prominent class of input signal in most underwater signal processing systems. The problem of identifying noise sources in the ocean is of great importance because of its numerous practical applications. In this paper, a methodology is presented for the detection and identification of underwater targets and noise sources based on nonparametric indicators. The proposed system uses cepstral coefficient analysis and the Kruskal-Wallis H statistic, along with other statistical indicators such as the F-test statistic, for effective detection and classification of noise sources in the ocean. Simulation results for typical underwater noise data and the set of identified underwater targets are also presented.
This document summarizes the work of Working Group II on Probabilistic Numerics from the SAMSI QMC Transition Workshop. The working group aims to develop probabilistic numerical methods that provide a richer probabilistic quantification of numerical error in outputs, allowing for better statistical inference. Members of the working group have published several papers on topics like Bayesian probabilistic numerical methods for solving differential equations and performing integral approximations, and applying these methods to problems in mathematical epidemiology and industrial process monitoring. The group has also organized workshops and reading groups to discuss the development of probabilistic numerical methods.
Multiple Linear Regression Model with Two Parameter Doubly Truncated New Symm...theijes
This document presents a multiple linear regression model with errors that follow a two-parameter doubly truncated new symmetric distribution. The model extends traditional linear regression by allowing for non-Gaussian distributed errors that are finite and bounded. Properties of the doubly truncated new symmetric distribution are derived, including its probability density function and characteristic function. Maximum likelihood and ordinary least squares methods are used to estimate the model parameters. A simulation study compares the proposed model to existing models that assume Gaussian or new symmetric distributed errors.
This document proposes a method for linear regression on symbolic data where each observation is represented by a Gaussian distribution. It derives the likelihood function for such "Gaussian symbols" and shows that it can be maximized using gradient descent. Simulation results demonstrate that the maximum likelihood estimator performs better than a naive least squares regression on the mean of each symbol. The method extends classical linear regression to the symbolic data setting.
Since the advent of the horseshoe priors for regularization, global-local shrinkage methods have proved to be a fertile ground for the development of Bayesian theory and methodology in machine learning. They have achieved remarkable success in computation, and enjoy strong theoretical support. Much of the existing literature has focused on the linear Gaussian case. The purpose of the current talk is to demonstrate that the horseshoe priors are useful more broadly, by reviewing both methodological and computational developments in complex models that are more relevant to machine learning applications. Specifically, we focus on methodological challenges in horseshoe regularization in nonlinear and non-Gaussian models; multivariate models; and deep neural networks. We also outline the recent computational developments in horseshoe shrinkage for complex models along with a list of available software implementations that allows one to venture out beyond the comfort zone of the canonical linear regression problems.
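For reference, the canonical horseshoe hierarchy in the linear Gaussian case, the baseline the talk generalizes beyond:

```latex
y_i \mid \beta \sim \mathcal{N}(x_i^{\top}\beta,\ \sigma^{2}), \qquad
\beta_j \mid \lambda_j, \tau \sim \mathcal{N}(0,\ \lambda_j^{2}\tau^{2}), \qquad
\lambda_j \sim \mathrm{C}^{+}(0,1), \qquad \tau \sim \mathrm{C}^{+}(0,1),
```

where C⁺(0,1) is the standard half-Cauchy; the heavy-tailed local scales λ_j let strong signals escape the global shrinkage imposed by τ.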
Min-based qualitative possibilistic networks are one of the effective tools for a compact representation of decision problems under uncertainty. Exact approaches for computing decisions based on possibilistic networks are limited by the size of the possibility distributions. Generally, these approaches rely on possibilistic propagation algorithms. An important step in the computation of the decision is the transformation of the DAG (Directed Acyclic Graph) into a secondary structure known as the junction tree (JT). This transformation is known to be costly and represents a difficult problem. We propose in this paper a new approximate approach for computing decisions under uncertainty within possibilistic networks. Computing the optimal optimistic decision no longer goes through the junction-tree construction step; instead, it is performed by calculating the degree of normalization in the moral graph resulting from merging the possibilistic network encoding the agent's knowledge with the one encoding its preferences.
This document discusses unifying Bayesian, frequentist, and fiducial statistical inferences through the concept of confidence distributions (CDs). CDs provide a sample-dependent distribution function for estimating parameters of interest that can be used to construct confidence intervals. Several examples are given that show how normalized likelihood functions, p-value functions, and other approaches can all be considered CDs. The document argues this provides a first-level conceptual union of the different statistical paradigms. A second-level union is proposed through viewing CDs, bootstrap distributions, fiducial distributions, and Bayesian posteriors as estimating parameters through artificial or "fake" data samples, representing uncertainty in a similar way across paradigms.
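A concrete instance: for a normal mean with known σ, the p-value function is a confidence distribution, and its quantiles recover the usual confidence limits:

```latex
H_n(\mu) = \Phi\!\left(\frac{\sqrt{n}\,(\mu - \bar{x})}{\sigma}\right),
\qquad
\left[\,H_n^{-1}(\alpha/2),\; H_n^{-1}(1-\alpha/2)\,\right]
\ \text{is a}\ (1-\alpha)\ \text{confidence interval for}\ \mu,
```

where Φ is the standard normal CDF and x̄ the sample mean.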
Survival analysis is an important method for analyzing time-to-event data in biomedical and reliability applications. It is often done with semiparametric methods, e.g., the Cox proportional hazards model. In this presentation I discuss an alternative parametric approach to survival analysis that can overcome some of the limitations of the Cox model and provide additional flexibility to the modeler. This approach may also be justified from a Bayesian perspective, and the connection is shown as well. Simulations and case studies illustrating the flexibility of the GAM approach to survival analysis, and its performance on par with existing methods for survival data, are discussed in the text.
The material presented herein is based on two publications:
1) Argyropoulos C, Unruh ML. Analysis of time to event outcomes in randomized controlled trials by generalized additive models. PLoS One. 2015 Apr 23;10(4):e0123784. doi: 10.1371/journal.pone.0123784. PMID: 25906075; PMCID: PMC4408032.
2) Bologa CG, Pankratz VS, Unruh ML, Roumelioti ME, Shah V, Shaffi SK, Arzhan S, Cook J, Argyropoulos C. High performance implementation of the hierarchical likelihood for generalized linear mixed models: an application to estimate the potassium reference range in massive electronic health records datasets. BMC Med Res Methodol. 2021 Jul 24;21(1):151. doi: 10.1186/s12874-021-01318-6. PMID: 34303362; PMCID: PMC8310602.
Data science is an area at the interface of statistics, computer science, and mathematics.
• Statisticians contributed a large inferential framework, important Bayesian perspectives, the bootstrap, CART and random forests, and the concepts of sparsity and parsimony.
• Computer scientists contributed an appetite for big, challenging problems. They also pioneered neural networks, boosting, and PAC bounds, and developed systems such as Spark and Hadoop for handling Big Data.
• Mathematicians contributed support vector machines, modern optimization, tensor analysis, and (maybe) topological data analysis.
Computational Pool-Testing with Retesting StrategyWaqas Tariq
Pool testing is a cost-effective procedure for identifying defective items in a large population. It also improves the efficiency of the testing procedure when imperfect tests are employed. This study develops a computational pool-testing strategy based on a proposed pool-testing-with-re-testing scheme. Statistical moments based on the applied design are generated. With the advent of computers in the 1980s, the pool-testing-with-re-testing strategy under discussion is handled in the context of computational statistics. From this study, it is established that re-testing reduces misclassifications significantly compared to the Dorfman procedure, although re-testing comes at a cost, namely an increase in the number of tests. The re-testing considered improves the sensitivity and specificity of the testing scheme.
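A small simulation in this spirit, assuming classical Dorfman pooling plus one confirmatory retest of individual positives; the prevalence, sensitivity, and specificity values are made up.

```python
import numpy as np

rng = np.random.default_rng(6)

def screen(n=100_000, pool=10, prev=0.01, sens=0.95, spec=0.98, retest=True):
    """Dorfman pooling: test pools, then individuals in positive pools;
    optionally retest individual positives to cut false positives."""
    status = rng.random(n) < prev                  # true defectives
    has_def = status.reshape(-1, pool).any(axis=1)
    # A pool tests positive with prob sens if it contains a defective,
    # and with prob 1 - spec otherwise.
    pool_pos = np.where(has_def,
                        rng.random(has_def.size) < sens,
                        rng.random(has_def.size) < 1 - spec)
    tests = has_def.size
    in_pos_pool = np.repeat(pool_pos, pool)
    test1 = in_pos_pool & np.where(status, rng.random(n) < sens,
                                           rng.random(n) < 1 - spec)
    tests += in_pos_pool.sum()
    called = test1
    if retest:  # confirmatory retest: call positive only if both tests agree
        test2 = test1 & np.where(status, rng.random(n) < sens,
                                         rng.random(n) < 1 - spec)
        tests += test1.sum()
        called = test2
    return tests, (called != status).mean()

for retest in (False, True):
    t, m = screen(retest=retest)
    print(f"retest={retest}: tests={t}, misclassification rate={m:.4%}")
```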
A note on estimation of population mean in sample survey using auxiliary info...Alexander Decker
1. The document proposes a class of estimators for estimating the population mean in two-phase sampling using auxiliary information.
2. Some common estimators like the ratio, product, and regression estimators are special cases within the proposed class. Expressions for bias and mean squared error of the estimators are obtained up to the first order of approximation.
3. Asymptotically optimum estimators are identified that have minimum mean squared error. The proposed class of estimators is found to perform better than usual ratio and other estimators for population mean estimation.
1. The document discusses implicit shape representations for liver segmentation from CT scans, comparing heat, signed distance, and Poisson transforms.
2. It evaluates these representations using principal component analysis to build a linear shape space model from training data.
3. Results show the Poisson transform provides the most stable and effective implicit representation for segmentation, outperforming the other methods in experiments projecting new shapes into the learned shape space; a minimal signed-distance sketch follows the list.
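A minimal sketch of one of the three representations, the signed distance transform of a binary shape (the heat and Poisson transforms are not reproduced); the disc stands in for a segmented organ.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed Euclidean distance map of a binary shape:
    positive inside the shape, negative outside, zero near the boundary."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return inside - outside

# Toy "liver" stand-in: a filled disc on a small grid.
yy, xx = np.mgrid[:64, :64]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
sd = signed_distance(mask)
print(sd.max(), sd.min())  # ~ +radius inside, strongly negative far outside
```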
This document provides a technical review of secure banking using RSA and AES encryption methodologies. It discusses how RSA and AES are commonly used encryption standards for secure data transmission between ATMs and bank servers. The document first provides background on ATM security measures and risks of attacks. It then reviews related work analyzing encryption techniques. The document proposes using a one-time password in addition to a PIN for ATM authentication. It concludes that implementing encryption standards like RSA and AES can make transactions more secure and build trust in online banking.
This document analyzes the performance of various modulation schemes for achieving energy efficient communication over fading channels in wireless sensor networks. It finds that for long transmission distances, low-order modulations like BPSK are optimal due to their lower SNR requirements. However, as transmission distance decreases, higher-order modulations like 16-QAM and 64-QAM become more optimal since they can transmit more bits per symbol, outweighing their higher SNR needs. Simulations show lifetime extensions up to 550% are possible in short-range networks by using higher-order modulations instead of just BPSK. The optimal modulation depends on transmission distance and balancing the energy used by electronic components versus power amplifiers.
This document provides a review of mobility management techniques in vehicular ad hoc networks (VANETs). It discusses three modes of communication in VANETs: vehicle-to-infrastructure (V2I), vehicle-to-vehicle (V2V), and hybrid vehicle (HV) communication. For each communication mode, different mobility management schemes are required due to their unique characteristics. The document also discusses mobility management challenges in VANETs and outlines some open research issues in improving mobility management for seamless communication in these dynamic networks.
This document provides a review of different techniques for segmenting brain MRI images to detect tumors. It compares the K-means and Fuzzy C-means clustering algorithms. K-means is an exclusive clustering algorithm that groups data points into distinct clusters, while Fuzzy C-means is an overlapping clustering algorithm that allows data points to belong to multiple clusters. The document finds that Fuzzy C-means requires more time for brain tumor detection compared to other methods like hierarchical clustering or K-means. It also reviews related work applying these clustering algorithms to segment brain MRI images.
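A compact fuzzy C-means loop illustrating the overlapping-membership update (contrast with K-means' hard assignment); the fuzzifier m = 2 and the synthetic data are illustrative.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy C-means: memberships U in [0,1] sum to 1 across clusters,
    unlike K-means' exclusive (hard) assignment."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))  # soft memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
centers, U = fuzzy_c_means(X)
print(np.round(centers, 2))  # near (0,0) and (5,5)
print(U[:3].round(2))        # soft memberships, rows sum to 1
```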
1) The document simulates and compares the performance of AODV and DSDV routing protocols in a mobile ad hoc network under three conditions: when users are fixed, when users move towards the base station, and when users move away from the base station.
2) The results show that both protocols have higher packet delivery and lower packet loss when users are either fixed or moving towards the base station, since signal strength is better in those scenarios. Performance degrades when users move away from the base station due to weaker signals.
3) AODV generally has better performance than DSDV, with higher throughput and packet delivery rates observed across the different user mobility conditions.
This document describes the design and implementation of 4-bit QPSK and 256-bit QAM modulation techniques using MATLAB. It compares the two techniques based on SNR, BER, and efficiency. The key steps of implementing each technique in MATLAB are outlined, including generating random bits, modulation, adding noise, and measuring BER. Simulation results show scatter plots and eye diagrams of the modulated signals. A table compares the results, showing that 256-bit QAM provides better performance than 4-bit QPSK. The document concludes that QAM modulation is more effective for digital transmission systems.
The document proposes a hybrid technique using Anisotropic Scale Invariant Feature Transform (A-SIFT) and Robust Ensemble Support Vector Machine (RESVM) to accurately identify faces in images. A-SIFT improves upon traditional SIFT by applying anisotropic scaling to extract richer directional keypoints. Keypoints are processed with RESVM and hypothesis testing to increase accuracy above 95% by repeatedly reprocessing images until the threshold is met. The technique was tested on similar and different facial images and achieved better results than SIFT in retrieval time and reduced keypoints.
This document studies the effects of dielectric superstrate thickness on microstrip patch antenna parameters. Three types of probes-fed patch antennas (rectangular, circular, and square) were designed to operate at 2.4 GHz using Arlondiclad 880 substrate. The antennas were tested with and without an Arlondiclad 880 superstrate of varying thicknesses. It was found that adding a superstrate slightly degraded performance by lowering the resonant frequency and increasing return loss and VSWR, while decreasing bandwidth and gain. Specifically, increasing the superstrate thickness or dielectric constant resulted in greater changes to the antenna parameters.
This document describes a wireless environment monitoring system that utilizes soil energy as a sustainable power source for wireless sensors. The system uses a microbial fuel cell to generate electricity from the microbial activity in soil. Two microbial fuel cells were created using different soil types and various additives to produce different current and voltage outputs. An electronic circuit was designed on a printed circuit board with components like a microcontroller and ZigBee transceiver. Sensors for temperature and humidity were connected to the circuit to monitor the environment wirelessly. The system provides a low-cost way to power remote sensors without needing battery replacement and avoids the high costs of wiring a power source.
1) The document proposes a model for a frequency tunable inverted-F antenna that uses ferrite material.
2) The resonant frequency of the antenna can be significantly shifted from 2.41GHz to 3.15GHz, a 31% shift, by increasing the static magnetic field placed on the ferrite material.
3) Altering the permeability of the ferrite allows tuning of the antenna's resonant frequency without changing the physical dimensions, providing flexibility to operate over a wide frequency range.
This document summarizes a research paper that presents a speech enhancement method using stationary wavelet transform. The method first classifies speech into voiced, unvoiced, and silence regions based on short-time energy. It then applies different thresholding techniques to the wavelet coefficients of each region - modified hard thresholding for voiced speech, semi-soft thresholding for unvoiced speech, and setting coefficients to zero for silence. Experimental results using speech from the TIMIT database corrupted with white Gaussian noise at various SNR levels show improved performance over other popular denoising methods.
This document reviews the design of an energy-optimized wireless sensor node that encrypts data for transmission. It discusses how sensing schemes that group nodes into clusters and transmit aggregated data can reduce energy consumption compared to individual node transmissions. The proposed node design calculates the minimum transmission power needed based on received signal strength and uses a periodic sleep/wake cycle to optimize energy when not sensing or transmitting. It aims to encrypt data at both the node and network level to further optimize energy usage for wireless communication.
This document discusses group consumption modes. It analyzes factors that impact group consumption, including external environmental factors like technological developments enabling new forms of online and offline interactions, as well as internal motivational factors at both the group and individual level. The document then proposes that group consumption modes can be divided into four types based on two dimensions: vertical (group relationship intensity) and horizontal (consumption action period). These four types are instrument-oriented, information-oriented, enjoyment-oriented, and relationship-oriented consumption modes. Finally, the document notes that consumption modes are dynamic and can evolve over time.
The document summarizes a study of different microstrip patch antenna configurations with slotted ground planes. Three antenna designs were proposed and their performance evaluated through simulation: a conventional square patch, an elliptical patch, and a star-shaped patch. All antennas were mounted on an FR4 substrate. The effects of adding different slot patterns to the ground plane on resonance frequency, bandwidth, gain and efficiency were analyzed parametrically. Key findings were that reshaping the patch and adding slots increased bandwidth and shifted resonance frequency. The elliptical and star patches in particular performed better than the conventional design. Three antenna configurations were selected for fabrication and measurement based on the simulations: a conventional patch with a slot under the patch, an elliptical patch with slots
1) The document describes a study conducted to improve call drop rates in a GSM network through RF optimization.
2) Drive testing was performed before and after optimization using TEMS software to record network parameters like RxLevel, RxQuality, and events.
3) Analysis found call drops were occurring due to issues like handover failures between sectors, interference from adjacent channels, and overshooting due to antenna tilt.
4) Corrective actions taken included defining neighbors between sectors, adjusting frequencies to reduce interference, and lowering the mechanical tilt of an antenna.
5) Post-optimization drive testing showed improvements in RxLevel, RxQuality, and a reduction in dropped calls.
This document describes the design of an intelligent autonomous wheeled robot that uses RF transmission for communication. The robot has two modes - automatic mode where it can make its own decisions, and user control mode where a user can control it remotely. It is designed using a microcontroller and can perform tasks like object recognition using computer vision and color detection in MATLAB, as well as wall painting using pneumatic systems. The robot's movement is controlled by DC motors and it uses sensors like ultrasonic sensors and gas sensors to navigate autonomously. RF transmission allows communication between the robot and a remote control unit. The overall aim is to develop a low-cost robotic system for industrial applications like material handling.
This document reviews cryptography techniques to secure the Ad-hoc On-Demand Distance Vector (AODV) routing protocol in mobile ad-hoc networks. It discusses various types of attacks on AODV like impersonation, denial of service, eavesdropping, black hole attacks, wormhole attacks, and Sybil attacks. It then proposes using the RC6 cryptography algorithm to secure AODV by encrypting data packets and detecting and removing malicious nodes launching black hole attacks. Simulation results show that after applying RC6, the packet delivery ratio and throughput of AODV increase while delay decreases, improving the security and performance of the network under attack.
The document describes a proposed modification to the conventional Booth multiplier that aims to increase its speed by applying concepts from Vedic mathematics. Specifically, it utilizes the Urdhva Tiryakbhyam formula to generate all partial products concurrently rather than sequentially. The proposed 8x8 bit multiplier was coded in VHDL, simulated, and found to have a path delay 44.35% lower than a conventional Booth multiplier, demonstrating its potential for higher speed.
This document discusses image deblurring techniques. It begins by introducing image restoration and focusing on image deblurring. It then discusses challenges with image deblurring being an ill-posed problem. It reviews existing approaches to screen image deconvolution including estimating point spread functions and iteratively estimating blur kernels and sharp images. The document also discusses handling spatially variant blur and summarizes the relationship between the proposed method and previous work for different blur types. It proposes using color filters in the aperture to exploit parallax cues for segmentation and blur estimation. Finally, it proposes moving the image sensor circularly during exposure to prevent high frequency attenuation from motion blur.
This document describes modeling an adaptive controller for an aircraft roll control system using PID, fuzzy-PID, and genetic algorithm. It begins by introducing the aircraft roll control system and motivation for developing an adaptive controller to minimize errors from noisy analog sensor signals. It then provides the mathematical model of aircraft roll dynamics and describes modeling the real-time flight control system in MATLAB/Simulink. The document evaluates PID, fuzzy-PID, and PID-GA (genetic algorithm) controllers for aircraft roll control and finds that the PID-GA controller delivers the best performance.
IOSR Journal of Mathematics (IOSR-JM)
e-ISSN: 2278-5728, p-ISSN: 2319-765X, Volume 7, Issue 4 (Jul.-Aug. 2013), PP 23-28
www.iosrjournals.org
Mathematical Analysis of propensity of aberration on the methods for interval estimation of the multinomial proportions

U. Sangeetha¹, M. Subbiah², M. R. Srinivasan³

¹(Department of Management Studies, SSN College of Engineering, Chennai)
²(Department of Mathematics, L. N. Government College, Ponneri)
³(Department of Statistics, University of Madras, Chennai)
Abstract: The multinomial distribution has found applications in many practical situations that deal with a discrete random variable with k possibilities (k > 2). Interval estimation for the proportion parameter in the special case k = 2, the binomial distribution, has been studied extensively in the literature. However, for k > 2, studies have focused mainly on the coverage probabilities of the estimation procedures, and yet there are other important aspects, such as the propensity of aberration in the limits of confidence intervals and computational issues. The present paper makes an attempt to look beyond coverage probabilities by marshalling the existing procedures in the classical and Bayesian approaches for k > 2. To alleviate the computational issues, a comprehensive R program is also made available to facilitate the implementation of the procedures in both the classical and Bayesian statistical paradigms.
Keywords: Aberrations, Bayesian, Contingency Tables, Multinomial, Zero Width Intervals.
I. INTRODUCTION
Probability estimation has drawn active research attention, as it has found wide application in many areas. Statistical methods for estimating a single binomial proportion, or associated arithmetic forms of two proportions, have been discussed extensively in the statistical literature (Subbiah, 2009). This includes point and interval estimation procedures and related properties such as coverage probabilities, expected length, and the existence of unrealistic bounds (Newcombe, 1998; Sweeting, 2004).
The problem of estimating simultaneous confidence intervals for multinomial proportions has witnessed active discussion in terms of theoretical, computational, and application aspects, and different procedures have been discussed extensively in the literature. Quesenberry and Hurst (1964), Goodman (1965), Fitzpatrick and Scott (1987), Sison and Glaz (1995), and Glaz and Sison (1999) can be considered earlier works. As with binomial proportion problems, Wald-type intervals and Wilson intervals are also available. Further, bootstrap methods (Jhun and Jeong, 2000) and power-divergence methods (Hou et al, 2003) have been discussed for developing simultaneous confidence intervals. Wang (2008) has proposed exact confidence coefficients, and Chafai and Concordet (2009) have discussed the problem especially for small samples.
Bayesian procedures have also been discussed for obtaining simultaneous confidence intervals for multinomial proportions. A conjugate prior model, the multinomial-Dirichlet, has been discussed extensively in the literature, and Gelman et al (2002) provide details of the relations and the effect of the hyperparameters; Tuyl et al (2009) have shown that the uniform prior can be considered the consensus prior for multinomial parameters; and recently Komaki (2012) has developed a prior, based on a specific Dirichlet prior, that is asymptotically minimax.
Further, the performance of these methods has been studied on the basis of coverage probability, length of intervals, and computational flexibility; small and large sample sizes as well as polarized cell counts are also considered in such comparisons. However, aberrations have not been discussed elaborately, especially for small samples, even though direct adjustments to such limits are recommended in practice. Computational issues have also been discussed in many studies; in particular, May and Johnson (1997, 2000) have listed computational tools for most of the available methods. However, the availability of these procedures is limited to selected software, so that most applications may rely on a specific method or a specific software package.
This paper has twofold objectives. The first is a discussion of possible aberrations, in a more general setting, for widely applicable procedures in the classical and Bayesian approaches; similar discussions for other estimation problems (for example, Newcombe, 1998) only illustrate the propensity of overshoot, and here a more general way of understanding such existence is attempted in order to classify the methods. Secondly, the limited availability of procedures like Sison and Glaz in selected software may deter their usage (http://www.rforge.net/doc/packages/NCStats/gofCI.html), and an attempt has therefore been made to develop comprehensive computational tools for most of the methods in both the classical and Bayesian paradigms. The R programming environment has been used to provide a tool for obtaining multinomial interval estimates based on the
Wald, Wilson, Quesenberry and Hurst (QH), Goodman (GM), Fitzpatrick and Scott (FS), and Sison and Glaz (SG) methods in classical inference, and the multinomial-Dirichlet distribution method in the Bayesian procedure. Numerical examples are presented to compare these methods, especially on small counts. The findings of this work have encouraged further investigation of sparse data sets in large two-dimensional categorical tables that are analyzed under multinomial sampling.
II. METHODS
To depict multinomial sampling, which is generally represented as a contingency table, the following notation is useful:

I = number of rows; J = number of columns; k = I x J
$n_i$ = cell count, i = 1, 2, ..., k
$n = \sum n_i$ (size of the sample)
$\pi_i$ = population proportion of the ith cell
$p_i = n_i/n$ = sample proportion (the MLE of $\pi_i$)
$\hat{\pi}_i^{(\cdot)}$ = Bayesian estimate of $\pi_i$, where the dot represents the prior
$A = \chi^2_{\alpha,1}$, the upper $100(1-\alpha)\%$ point of the chi-square distribution with 1 degree of freedom
$A_1 = \chi^2_{\alpha,\,k-1}$
$A_2 = \chi^2_{\alpha/k,\,1}$
$z_{\alpha}$ = the upper $100(1-\alpha)\%$ point of the standard normal distribution
Quesenberry and Hurst (1964) have proposed an interval based on the Pearson statistic, with limits
$$\frac{2np_i + A_1 \mp \sqrt{A_1\big(A_1 + 4np_i(1-p_i)\big)}}{2(A_1 + n)}.$$
Fitzpatrick and Scott (1987) have proposed the confidence interval $p_i \pm z_{\alpha/2}\,\frac{1}{2\sqrt{n}}$. A Wald-type interval based on the multivariate central limit theorem has the form $p_i \pm \sqrt{\frac{Ap_i(1-p_i)}{n}}$, and a continuity-corrected version of these limits has the form $p_i \pm \left(\sqrt{\frac{Ap_i(1-p_i)}{n}} + \frac{1}{2n}\right)$.
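These closed-form limits are straightforward to compute directly. A minimal R sketch follows (function and argument names are illustrative, not those of the program described in this paper):

```r
# Closed-form simultaneous intervals: Wald, Wald CC, Quesenberry-Hurst (QH),
# and Fitzpatrick-Scott (FS). A minimal sketch; names are illustrative.
classical_cis <- function(counts, conf = 0.95) {
  n  <- sum(counts)
  k  <- length(counts)
  p  <- counts / n
  A  <- qchisq(conf, df = 1)        # A  = chi-square upper point, 1 df
  A1 <- qchisq(conf, df = k - 1)    # A1 = chi-square upper point, k-1 df
  z  <- qnorm(1 - (1 - conf) / 2)   # z_{alpha/2}

  wald_half <- sqrt(A * p * (1 - p) / n)
  qh_root   <- sqrt(A1 * (A1 + 4 * n * p * (1 - p)))

  list(
    wald    = cbind(LL = p - wald_half, UL = p + wald_half),
    wald_cc = cbind(LL = p - wald_half - 1 / (2 * n),
                    UL = p + wald_half + 1 / (2 * n)),
    qh      = cbind(LL = (2 * n * p + A1 - qh_root) / (2 * (A1 + n)),
                    UL = (2 * n * p + A1 + qh_root) / (2 * (A1 + n))),
    fs      = cbind(LL = p - z / (2 * sqrt(n)),
                    UL = p + z / (2 * sqrt(n)))
  )
}
```

Applied to the well-known Sison and Glaz (1995) example counts, classical_cis(c(56, 72, 73, 59, 62, 87, 58)), this appears to reproduce the Wald, Wald CC, QH, and FS columns of TABLE 2 to the reported precision.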
Further, Sison and Glaz (1995) have proposed a method to determine simultaneous confidence intervals for multinomial proportions based on the truncated Poisson random variable and its central and factorial moments. In this method, the cell counts are assumed to be independent Poisson random variables with parameters $\lambda_i = n\pi_i$; since $\pi_i$ is unknown, $\lambda_i$ is estimated by $np_i = n_i$.

If $A_i = \{b_i \le n_i \le a_i\}$, then
$$P\Big(A_1, \ldots, A_k \,\Big|\, \textstyle\sum n_i = n\Big) = \frac{n!}{n^n e^{-n}} \prod_{i=1}^{k} P(b_i \le n_i \le a_i)\, P(W = n),$$
where $W = \sum_{i=1}^{k} Y_i$ and $Y_i$ is the Poisson variable truncated to $[b_i, a_i]$. Since $n_i \sim \text{Poisson}(\lambda_i)$,
$$P(b_i \le n_i \le a_i) = \sum_{t=b_i}^{a_i} P(n_i = t) = F(a_i) - F(b_i - 1),$$
where F is the CDF of the Poisson random variable.

Further, the factorial moments $\mu_{(r)} = \lambda^r\,\frac{F(a-r) - F(b-r-1)}{F(a) - F(b-1)}$ of the truncated Poisson variable $Y_i$ and its central moments (up to r = 4) are calculated to obtain $P(W = n)$. The limits of the $100(1-\alpha)\%$ confidence intervals are then obtained as
$$LL = p_i - \frac{C}{n} \qquad\text{and}\qquad UL = p_i + \frac{C}{n} + \frac{2\delta}{n},$$
where C is the positive integer satisfying $\rho(C) \le 1-\alpha \le \rho(C+1)$, with $\rho(C) = P\big[p_i - C/n \le \pi_i \le p_i + C/n\big]$ and $\delta = \frac{(1-\alpha) - \rho(C)}{\rho(C+1) - \rho(C)}$.
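Implementing the truncated-Poisson machinery by hand is tedious; as a hedged alternative, an existing implementation can be used. The sketch below assumes the DescTools R package (not the program developed in this paper) is installed:

```r
# Sison-Glaz intervals via the DescTools package (an assumed dependency;
# this is not the paper's own program).
library(DescTools)
x <- c(56, 72, 73, 59, 62, 87, 58)   # Sison-Glaz (1995) example counts,
                                     # which match the proportions of data set III
MultinomCI(x, conf.level = 0.95, method = "sisonglaz")
```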
The Bayesian approach to estimating the cell probabilities of a two-way contingency table under multinomial sampling can be outlined as a two-step likelihood-prior combination. If the random vector $\mathbf{n} = (n_1, n_2, \ldots, n_k)$ has a multinomial distribution with n trials and cell probabilities $\pi_1, \pi_2, \ldots, \pi_k$, then the joint pmf of $(n_1, n_2, \ldots, n_k)$ is
$$f(n_1, n_2, \ldots, n_k) = \frac{n!}{n_1!\, n_2! \cdots n_k!}\; \pi_1^{n_1} \pi_2^{n_2} \cdots \pi_k^{n_k}.$$
The conjugate prior (Gelman et al, 2002) for the proportion parameters $\theta = (\pi_1, \pi_2, \ldots, \pi_k)$ is the multivariate generalization of the Beta distribution known as the Dirichlet$(\alpha_1, \alpha_2, \ldots, \alpha_k)$ distribution, with $\alpha_j > 0$ and PDF
$$p(\theta) = \frac{\Gamma\big(\sum_i \alpha_i\big)}{\prod_i \Gamma(\alpha_i)}\; \pi_1^{\alpha_1 - 1} \pi_2^{\alpha_2 - 1} \cdots \pi_k^{\alpha_k - 1}, \qquad \sum_j \pi_j = 1.$$
Hence the posterior distribution is
$$\pi(\theta \mid x) \propto \pi_1^{\alpha_1 + x_1 - 1} \cdots \pi_k^{\alpha_k + x_k - 1},$$
i.e., Dirichlet$(\alpha_1 + x_1, \alpha_2 + x_2, \ldots, \alpha_k + x_k)$ up to a normalizing constant. For a non-informative prior, widely chosen values of $(\alpha_1, \ldots, \alpha_k)$ include $\alpha_i = 1$, corresponding to the uniform prior distribution; $\alpha_i = 1/2$, the Jeffreys prior; and $\alpha_i = 1.41$, corresponding to the asymptotic prior (Komaki, 2012).
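Because the posterior is again Dirichlet, equal-tailed credible intervals can be obtained by direct simulation. A minimal sketch using the standard device of normalizing independent Gamma draws (the function name and defaults are illustrative):

```r
# Equal-tailed credible intervals from the Dirichlet(alpha + x) posterior.
# Dirichlet draws are obtained by normalizing independent Gamma variates.
dirichlet_ci <- function(x, alpha = 1, conf = 0.95, nsim = 1e5) {
  k <- length(x)
  g <- matrix(rgamma(nsim * k, shape = x + alpha, rate = 1),
              ncol = k, byrow = TRUE)   # each row is one posterior draw
  theta <- g / rowSums(g)               # normalize -> Dirichlet draws
  t(apply(theta, 2, quantile,
          probs = c((1 - conf) / 2, 1 - (1 - conf) / 2)))
}

dirichlet_ci(c(56, 72, 73, 59, 62, 87, 58), alpha = 1)     # uniform prior
dirichlet_ci(c(56, 72, 73, 59, 62, 87, 58), alpha = 0.5)   # Jeffreys prior
dirichlet_ci(c(56, 72, 73, 59, 62, 87, 58), alpha = 1.41)  # asymptotic prior
```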
III. CONDITIONS FOR ABERRATIONS
In most studies, overshoot or zero-width intervals have been illustrated on the basis of low (zero or small positive) cell counts. However, a more general scenario can be derived by considering the mathematical form of each method to indicate the possible occurrences of aberrations. This section covers the possibility of cells with zero counts as well as the ranges of cell counts that may result in overshooting estimates for the associated lower and upper limits (LL and UL) of the confidence intervals.
In Quesenberry and Hurst, if $n_i = 0$ for some i, then
$$LL = \frac{0 + A_1 - \sqrt{A_1(A_1 + 0)}}{2(A_1 + n)} = 0 \qquad\text{and}\qquad UL = \frac{A_1}{A_1 + n},$$
which yield fixed limits for the cells with zero count.

Also, LL < 0 holds
only if $A_1 + 2np_i - \sqrt{A_1\big(A_1 + 4np_i(1-p_i)\big)} < 0$
only if $(A_1 + 2np_i)^2 < A_1^2 + 4nA_1 p_i(1-p_i)$
only if $A_1^2 + 4n^2p_i^2 + 4np_iA_1 < A_1^2 + 4nA_1p_i - 4nA_1p_i^2$
only if $4n^2p_i^2 < -4nA_1p_i^2$
only if $n < -A_1$, which is not possible.
Hence QH never has negative lower limits.

Similarly, UL > 1 holds only if $2np_i + A_1 + \sqrt{A_1\big(A_1 + 4np_i(1-p_i)\big)} > 2(A_1 + n)$, and subsequent algebra yields the condition $0 > n(1-p_i) + A_1(1-p_i)$, which is not possible (since n, $1-p_i$, and $A_1$ are positive); hence QH never provides an UL greater than 1. Since Goodman and Wilson differ from Quesenberry and Hurst only in the $\chi^2$ value used (A, $A_1$, $A_2$), similar conclusions can be made for these two procedures.
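The algebra above can also be spot-checked numerically; a small illustrative scan in R (the grid is arbitrary, not exhaustive) confirms that the QH limits never leave [0, 1]:

```r
# Numerical spot-check of the QH no-aberration result over a small grid.
qh_limits <- function(ni, n, k, conf = 0.95) {
  p  <- ni / n
  A1 <- qchisq(conf, df = k - 1)
  root <- sqrt(A1 * (A1 + 4 * n * p * (1 - p)))
  c((2 * n * p + A1 - root) / (2 * (A1 + n)),
    (2 * n * p + A1 + root) / (2 * (A1 + n)))
}
grid <- expand.grid(ni = 0:30, n = 1:30, k = 2:6)
grid <- subset(grid, ni <= n)
lims <- t(mapply(qh_limits, grid$ni, grid$n, grid$k))
all(lims[, 1] >= 0 & lims[, 2] <= 1)   # TRUE: no LL < 0 or UL > 1 found
```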
In the Fitzpatrick and Scott method, when $n_i = 0$ the corresponding LL and UL are $\mp \frac{z_{\alpha/2}}{2\sqrt{n}}$, which obviously yields a negative LL.

Also, LL < 0 only if
$$p_i - z_{\alpha/2}\,\frac{1}{2\sqrt{n}} < 0, \quad\text{i.e.,}\quad p_i < z_{\alpha/2}\,\frac{1}{2\sqrt{n}}, \quad\text{i.e.,}\quad 0 < n_i < \frac{\sqrt{n}}{2}\, z_{\alpha/2}.$$
This is possible in practice, and such a case can also be attributed to polarized cell counts with varying k. For example, if n = 36, cells with $1 \le n_i \le 5$ have negative lower limits for a 95% confidence interval. Further, if in this case k is small, say 4, and $n_i < 5$ for at least one i (say i = 1), the remaining three cells must add up to at least 32. A hypothetical example (1, 14, 12, 9) yields a negative lower limit for the cell i = 1.

Also, UL > 1 only if
$$p_i + z_{\alpha/2}\,\frac{1}{2\sqrt{n}} > 1, \quad\text{i.e.,}\quad p_i > 1 - z_{\alpha/2}\,\frac{1}{2\sqrt{n}}, \quad\text{i.e.,}\quad n_i > n - \frac{\sqrt{n}}{2}\, z_{\alpha/2}.$$
This case is also possible in practice and may indicate extremely distant values in the data set. Though FS has a symmetry property, aberration is an issue that arises in such cases: the hypothetical data set (73, 1, 1, 6) yields an upper limit greater than 1 for the first cell and negative lower limits for the other three cells.
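Both hypothetical data sets are easy to verify in R (a sketch; the fs_ci name is illustrative):

```r
# FS intervals for the two hypothetical data sets above (95% level).
fs_ci <- function(counts, conf = 0.95) {
  n <- sum(counts)
  z <- qnorm(1 - (1 - conf) / 2)
  cbind(LL = counts / n - z / (2 * sqrt(n)),
        UL = counts / n + z / (2 * sqrt(n)))
}
fs_ci(c(1, 14, 12, 9))  # n = 36: count 1 < sqrt(36)*1.96/2 = 5.88, so LL < 0
fs_ci(c(73, 1, 1, 6))   # n = 81: count 73 > 81 - sqrt(81)*1.96/2 = 72.18, so UL > 1
```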
In the Wald-type interval, when $n_i = 0$, both the lower and upper limits are 0, which is an instance of a zero-width interval. Also, LL < 0
only if $p_i - \sqrt{\frac{Ap_i(1-p_i)}{n}} < 0$
only if $p_i^2 < \frac{Ap_i(1-p_i)}{n}$
only if $p_i < \frac{A(1-p_i)}{n}$
only if $n^2 p_i < A(n - n_i)$
only if $n\,n_i < An - An_i$
only if $n_i < \frac{n}{n+A}\,A = \left(1 - \frac{A}{n+A}\right)A = (AF)\,A$,
where AF is an adjusting factor (for large n, AF → 1 and the bound approaches A).

Also, UL > 1
only if $p_i + \sqrt{\frac{Ap_i(1-p_i)}{n}} > 1$
only if $\frac{Ap_i(1-p_i)}{n} > (1-p_i)^2$
only if $\frac{Ap_i}{n} > 1-p_i$
only if $An_i > n(n - n_i)$
only if $n_i > \left(1 - \frac{A}{n+A}\right)n$.
However, if n is large then AF → 1, and thereby $n_i \to n$ in the ith cell; this means $n_j \to 0$ for $j \ne i$, which is an illustration of polarized cell counts.
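For instance, with n = 36 and A ≈ 3.84 at the 95% level, the thresholds are nA/(n + A) ≈ 3.47 for a negative LL and n²/(n + A) ≈ 32.5 for an UL above 1. A quick illustrative check in R:

```r
# Wald-type intervals illustrating the zero-width and overshoot conditions.
wald_ci <- function(counts, conf = 0.95) {
  n <- sum(counts)
  p <- counts / n
  half <- sqrt(qchisq(conf, df = 1) * p * (1 - p) / n)
  cbind(LL = p - half, UL = p + half)
}
wald_ci(c(0, 3, 33))  # n = 36: count 0 -> zero-width interval;
                      # count 3 < 3.47 -> LL < 0; count 33 > 32.5 -> UL > 1
```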
In the Wald CC method, if $n_i = 0$, the limits of the corresponding interval are $\mp\frac{1}{2n}$, which yields an obviously negative lower limit. Also, LL < 0
only if $p_i - \sqrt{\frac{Ap_i(1-p_i)}{n}} - \frac{1}{2n} < 0$
only if $\left(p_i - \frac{1}{2n}\right)^2 < \frac{Ap_i(1-p_i)}{n}$
only if $\frac{n_i^2}{n^2} + \frac{1}{4n^2} - \frac{n_i}{n^2} < \frac{A}{n}p_i - \frac{A}{n}p_i^2 = \frac{(n - n_i)A\,n_i}{n^3}$
only if $n_i^2(n+A) - n_i\,n(1+A) + \frac{n}{4} < 0$
only if $n_i^- < n_i < n_i^+$, where
$$n_i^{\pm} = \frac{n(1+A) \pm \sqrt{n^2A^2 + 2n^2A - nA}}{2(n+A)}.$$
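The roots are easy to evaluate; for n = 36 at the 95% level, an illustrative computation gives $n_i^- \approx 0.05$ and $n_i^+ \approx 4.32$, so counts 1 through 4 yield a negative Wald CC lower limit:

```r
# Roots bracketing the cell counts that give a negative Wald CC lower limit.
n <- 36
A <- qchisq(0.95, df = 1)
disc  <- sqrt(n^2 * A^2 + 2 * n^2 * A - n * A)
roots <- (n * (1 + A) + c(-1, 1) * disc) / (2 * (n + A))
roots  # approx. 0.05 and 4.32: counts 1..4 produce LL < 0
```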
For the Sison and Glaz method, $p_i = n_i/n$ and $LL = \frac{n_i}{n} - \frac{C}{n}$; hence whenever $C > n_i$ (for some i), the LL of the ith confidence interval is negative. Similarly, $p_i + \frac{C}{n} + \frac{2\delta}{n} > 1$ only if $n_i + C + 2\delta > n$, i.e., only if $n_i > n - (C + 2\delta)$, where δ depends on the distances between ρ(C), 1-α, and ρ(C+1) and satisfies δ ≤ 1; thus for any i with $C > (n-1) - n_i$, the UL of the ith CI can exceed 1.
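The lower-limit condition is trivial to scan once C is known for a data set; a sketch with a hypothetical C = 16 (the counts and C below are illustrative, not taken from the paper's data):

```r
# SG lower limits LL = (n_i - C)/n for a given (here hypothetical) C.
sg_ll <- function(counts, C) (counts - C) / sum(counts)
sg_ll(c(5, 10, 14, 16, 21), C = 16)
# cells below 16 get negative lower limits; the cell equal to 16 gets LL = 0
```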
IV. DATA ANALYSIS
To illustrate the notion of estimation and the cases in which negative lower limits occur, four data sets (referred to as I-IV) have been considered and are presented in TABLE 1. The first data set has been used in Chafai and Concordet (2009) and deals with the difference in behavior of male and female students with respect to smoking habits; the second, from Szyda et al (2008), has been used to analyze DNA pooling experiments with the arctic fox; a data set from Sison and Glaz (1995) and Hou et al (2003) is the third; and the fourth is a partial collection of data sets from the data repository (www.fars-nhtsa.gov/states/statesAlcohol.aspx) that deals with fatal crashes and related information from the National Highway Traffic Safety Administration's (NHTSA) Fatality Analysis Reporting System (FARS) and its supplemental files. FARS is a collection of most or all fatal motor vehicle crashes that occur in the United States of America. The alcohol-related files contain counts of crashes and fatalities at three different blood alcohol concentration (BAC) levels (0.00, 0.01-0.07, and 0.08 or more). However, only nine tables for the year 2009, with a wide range of cell counts, have been included for the analysis.
The data sets were collected so as to cover the presence of zeros, sparse non-zero counts, and a wide range of counts. The minimum counts in data sets I and III are 3 and 56 respectively, and that of IV varies from 2 to 14 over the nine tables. It has been observed that some procedures yield obvious negative lower limits, whereas the Wald method provides zero-width intervals (ZWI) for zero counts. However, the main interest of the study lies in the aberrations that can occur for non-zero counts. In the case of data set I, four methods (Wald, Wald CC, FS, and SG) uniformly yield negative lower limits when the cell count is 3, and such a result does not occur with the other methods. In fact, QH, GM, and Wilson can be eliminated from such comparisons, as they never exhibit the characteristic of aberration, and similar conclusions hold for the Bayesian procedures too.
For data set II, Wald-type intervals do not yield negative estimates for positive counts, whereas SG and FS produce negative lower limits when the cell counts are 5, 14, and 10; further, when the cell count is 16, SG has zero as the lower limit while FS has a negative lower limit. In fact, the choice of C in the SG procedure determines such values: when a cell count equals C the lower limit is zero, and for smaller cell counts it is negative. This kind of behavior warrants deeper insight into judging the sparseness among positive cell counts, particularly when the cell counts are highly polarized.

Similar conclusions are visible for all nine tables of data set IV, but a careful observation sheds light on SG: when the range of cell counts is very high (as much as 176), SG does not produce a negative lower limit for the cell count 14, in contrast to data set II. Hence a quick decision cannot be reached regarding the performance of SG when highly polarized counts exist (May and Johnson, 2000) in a given data set; rather, the value of C is a notably influential number for the estimated values in such cases. On the other hand, with larger values (minimum 56) present in data set III, none of the cells exhibit aberrations even though the range is notably high.
Further, TABLE 2 and TABLE 3 summarize the interval estimates (LL: lower limit; UL: upper limit) for the multinomial proportions of data set III, to facilitate a comparison of the seven classical methods with the three Bayesian methods. The Bayesian methods yield the smallest volume of all the methods compared, and their estimates are quite close to those of the Wilson and Wald CC methods. Since comparisons among classical procedures have already been made in many studies, such as Hou et al (2003), this work does not repeat that effort among the non-Bayesian methods. Rather, the emphasis is on the advantage of direct Bayesian posterior estimation from the Dirichlet distribution with standard objective priors, which yields good results, involves less computation, and never yields implausible estimates under different circumstances of cell counts and sizes of the I x J tables.
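The "Volume" rows in TABLES 2 and 3 are, presumably, the products of the k interval lengths; for instance, the Wilson column of TABLE 2 reproduces the tabled value:

```r
# Volume of a simultaneous confidence region: product of interval lengths.
wilson <- cbind(LL = c(0.094, 0.124, 0.126, 0.099, 0.105, 0.154, 0.097),
                UL = c(0.153, 0.190, 0.192, 0.160, 0.167, 0.224, 0.157))
prod(wilson[, "UL"] - wilson[, "LL"])   # ~4.08e-09, matching TABLE 2
```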
V. CONCLUSION
Methods for estimating simultaneous multinomial proportions have received research attention in recent times, but not specifically with regard to unrealistic confidence intervals. For example, Lee et al (2011) have pointed out implausible limits for confidence intervals as unrealistic. Though a simple practical remedy is available, such as suitably truncating the limits, a systematic method for understanding such impractical estimates has not been developed, at least in the case of multinomial proportions. This paper makes an attempt to derive suitable mathematical expressions for the propensity of aberrations in the methods for estimating multinomial proportions.

In the present work, seven classical procedures and a Bayesian method with three priors are compared for the interval estimation of multinomial proportions. For large samples, and at an introductory level, Wald-type intervals can be suggested: they are computationally simpler, yet they do not possess other essential properties (such as adequate coverage probability) and so cannot be recommended for practical applications, especially for low cell counts.

Though FS is comparable to the Wald-type intervals in its mathematical simplicity, this method is also not a favorable choice when the cell counts are small. Wilson intervals and the similar forms QH and GM provide plausible limits for all kinds of data, but conservatism is an issue when k is comparatively large.
However, SG performs better in terms of achieving the required coverage probability in all cases by virtue of its mathematical form; yet it suffers a limitation in terms of propensity of aberration whenever the cell counts are polarized. Hence careful judgment is needed in handling such limits, which can occur even when a cell count is as large as 14. This work also makes SG more widely available computationally, and hence, among the classical methods considered, SG can be recommended with a caution about the range of cell counts. This paper thus provides an advance indication of the propensity of aberrations in confidence limits.

On the other hand, the Bayesian multinomial-Dirichlet methodology has been revisited with two well-known objective priors (uniform and Jeffreys) together with a recently discussed asymptotic prior. It has been observed that the Bayesian methods are quite flexible in modeling, mathematical treatment, and computation, behave uniformly over a variety of data, and never require corrections due to overshoot of the estimates. Further, advances in simulation techniques encourage the adoption of objective Bayesian methods for obtaining interval estimates for multinomial proportions. Together with the existing literature on multinomial proportions, this study provides a consolidated view for selecting procedures based on criteria such as the size of the contingency table (k), the nature of the cell counts and their range as an indication of polarization, computational flexibility, propensity of aberrations, and desirable frequentist properties. However, the choice between Bayesian and classical methods needs to be handled as in other situations of data analysis.
REFERENCES
[1] M. Subbiah, Bayesian inference on sparseness in 2 x 2 contingency tables and its applications, unpublished Ph.D. thesis, University of Madras, Chennai, 2009.
[2] R.G. Newcombe, Interval estimation for the difference between independent proportions: comparison of eleven methods, Statistics in Medicine, 17 (1998) 873-890.
[3] M.J. Sweeting, A.J. Sutton & P.C. Lambert, What to add to nothing? Use and avoidance of continuity corrections in meta-analysis of sparse data, Statistics in Medicine, 23 (2004) 1351-1375.
[4] C.P. Quesenberry & D.C. Hurst, Large sample simultaneous confidence intervals for multinomial proportions, Technometrics, 6 (1964) 191-195.
[5] L.A. Goodman, On simultaneous confidence intervals for multinomial proportions, Technometrics, 7 (1965) 247-254.
[6] S. Fitzpatrick & A. Scott, Quick simultaneous confidence intervals for multinomial proportions, Journal of the American Statistical Association, 82 (1987) 875-878.
[7] C.P. Sison & J. Glaz, Simultaneous confidence intervals and sample size determination for multinomial proportions, Journal of the American Statistical Association, 90 (1995) 366-369.
[8] J. Glaz & C.P. Sison, Simultaneous confidence intervals for multinomial proportions, Journal of Statistical Planning and Inference, 82 (1999) 251-262.
[9] M. Jhun & H.C. Jeong, Applications of bootstrap methods for categorical data analysis, Computational Statistics & Data Analysis, 35 (2000) 83-91.
[10] C.D. Hou, J. Chiang & J.J. Tai, A family of simultaneous confidence intervals for multinomial proportions, Computational Statistics & Data Analysis, 43 (2003) 29-45.
[11] H. Wang, Exact confidence coefficients of simultaneous confidence intervals for multinomial proportions, Journal of Multivariate Analysis, 99 (2008) 896-911.
[12] D. Chafai & D. Concordet, Confidence regions for the multinomial parameter with small sample size, Journal of the American Statistical Association, 104 (2009) 1071-1079.
[13] A. Gelman, J.B. Carlin, H.S. Stern & D.B. Rubin, Bayesian Data Analysis (London: Chapman & Hall, 2002).
[14] F. Tuyl, R. Gerlach & K. Mengersen, Posterior predictive arguments in favor of the Bayes-Laplace prior as the consensus prior for binomial and multinomial parameters, Bayesian Analysis, 4 (2009) 151-158.
[15] F. Komaki, Asymptotically minimax Bayesian predictive densities for multinomial models, Electronic Journal of Statistics, 6 (2012) 934-957.
[16] W.L. May & W.D. Johnson, Constructing simultaneous confidence intervals for multinomial proportions, Computer Methods and Programs in Biomedicine, 53 (1997) 153-162.
[17] J. Szyda, Z. Liu, D.M. Zaton, H. Wierzbicki & A. Rzasa, Statistical aspects of genetic association testing in small samples, based on selective DNA pooling data in the arctic fox, Journal of Applied Genetics, 49 (2008) 81-92.
[18] Lee, P.R. Armstrong, J.A. Thomasson, R. Sui & M. Casada, Application of binomial and multinomial probability statistics to the sampling design process of a global grain tracing and recall system, Food Control, 22 (2011) 1085-1094.
Table 1: Illustrative data sets and their basic characteristics

Data set   Size of original table   No. of cells   Zero entries: No. (%)   Range of cell counts
I          2 x 2                    4              0 (0)                   7
II         4 x 5                    20             12 (60)                 66
III        1 x 7                    7              0 (0)                   31
IV         1 x 3 (9 tables)         3              0 (0)                   39, 64, 15, 46, 66, 75, 176, 42, 74
Table 2: Comparison of seven 95% confidence intervals resulting from classical procedures for data set III

Cell     Wald            Wald CC         Wilson          Quesenberry-Hurst   Goodman         Fitzpatrick-Scott   Sison-Glaz
         LL      UL      LL      UL      LL      UL      LL      UL          LL      UL      LL      UL          LL      UL
1        0.090   0.149   0.089   0.150   0.094   0.153   0.076   0.183       0.085   0.166   0.075   0.165       0.079   0.164
2        0.121   0.187   0.120   0.188   0.124   0.190   0.104   0.222       0.115   0.204   0.109   0.200       0.114   0.199
3        0.123   0.189   0.122   0.190   0.126   0.192   0.106   0.225       0.116   0.207   0.111   0.202       0.116   0.201
4        0.096   0.156   0.095   0.158   0.099   0.160   0.081   0.191       0.091   0.173   0.081   0.172       0.086   0.171
5        0.102   0.164   0.101   0.165   0.105   0.167   0.087   0.198       0.096   0.181   0.087   0.178       0.092   0.177
6        0.151   0.222   0.150   0.223   0.154   0.224   0.131   0.258       0.143   0.239   0.141   0.232       0.146   0.231
7        0.094   0.154   0.093   0.155   0.097   0.157   0.080   0.188       0.089   0.171   0.079   0.170       0.084   0.169
Volume   4.07 x 10^-9    5.14 x 10^-9    4.08 x 10^-9    2.52 x 10^-7        3.59 x 10^-8    5.11 x 10^-8        3.24 x 10^-8
Table 3: Comparison of three 95% confidence intervals resulting from Bayesian procedures for data set III

Cell     Bayesian-1        Bayesian-2        Bayesian-3
         LL       UL       LL       UL       LL       UL
1        0.093    0.151    0.092    0.151    0.0930   0.1505
2        0.123    0.188    0.123    0.188    0.1237   0.1874
3        0.125    0.191    0.124    0.191    0.1247   0.1902
4        0.099    0.159    0.090    0.158    0.0984   0.1581
5        0.104    0.164    0.109    0.166    0.1038   0.1648
6        0.152    0.223    0.153    0.223    0.1518   0.2208
7        0.097    0.156    0.096    0.156    0.0971   0.1556
Volume   3.76 x 10^-9      3.96 x 10^-9      3.53 x 10^-9