International Journal of Engineering Research and Applications (IJERA) is an open access, online, peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design, etc.
Applied Mathematics and Sciences: An International Journal (MathSJ) - mathsjournal
The main goal of this research is to give a complete account of numerical integration based on the Newton-Cotes formulas, and to compare the accuracy of the Trapezoidal rule, Simpson's 1/3 rule, and Simpson's 3/8 rule. To verify accuracy, the rules are compared by the size of the error each produces. The software package MATLAB R2013a is used to determine the best method, and the results are compared graphically. It is concluded that, among the methods considered, Simpson's 1/3 rule is the most effective and accurate for solving a definite integral, subject to the condition that the number of subdivisions is even.
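As a minimal sketch of the comparison described in this abstract (written in Python rather than the MATLAB R2013a package the study itself uses, and with an arbitrary test integral of exp(x) on [0, 1]), the composite trapezoidal and Simpson's 1/3 rules can be compared against a known closed form:

import numpy as np

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule with n subintervals.
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def simpson13(f, a, b, n):
    # Composite Simpson's 1/3 rule; n must be even.
    if n % 2:
        raise ValueError("Simpson's 1/3 rule needs an even number of subintervals")
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])

exact = np.e - 1                                  # integral of exp(x) on [0, 1]
for n in (4, 8, 16):
    t = trapezoid(np.exp, 0, 1, n)
    s = simpson13(np.exp, 0, 1, n)
    print(n, abs(t - exact), abs(s - exact))      # Simpson's error shrinks much faster

Doubling n cuts the trapezoidal error by about 4x but Simpson's error by about 16x, which is the behavior the study's graphical comparison illustrates.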
Comparing the methods of Estimation of Three-Parameter Weibull distribution - IOSRJM
This document compares different methods for estimating the parameters of a three-parameter Weibull distribution from sample data, including graphical methods, trial and error, Jiang and Murthy's approach, and maximum likelihood estimation. It applies these methods to simulated sample data from a Weibull distribution with known parameters to estimate the location, scale, and shape parameters. The trial-and-error method uses the residual sum of squares to select the location parameter that provides the best fit. The maximum likelihood method derives equations that are solved numerically. The results show that all methods can estimate the parameters reasonably well, though accuracy varies with the sample size and with whether the sample is full or censored.
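A minimal illustration of the simulate-then-fit maximum-likelihood step (a Python/SciPy sketch, not the paper's own code; the "known" parameter values below are arbitrary assumptions): scipy.stats.weibull_min parameterizes the three-parameter Weibull by shape c, location loc, and scale, and its fit method solves the likelihood equations numerically.

import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
# Simulate from a Weibull with known shape=1.5, location=2.0, scale=3.0.
sample = weibull_min.rvs(c=1.5, loc=2.0, scale=3.0, size=500, random_state=rng)

# Maximum likelihood estimates of (shape, location, scale).
shape_hat, loc_hat, scale_hat = weibull_min.fit(sample)
print(shape_hat, loc_hat, scale_hat)  # should land near 1.5, 2.0, 3.0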
Linear regression [Theory and Application (In physics point of view) using py... - ANIRBANMAJUMDAR18
Machine-learning models are behind many recent technological advances, including high-accuracy text translation and self-driving cars. They are also increasingly used by researchers to help solve physics problems, such as finding new phases of matter, detecting interesting outliers in data from high-energy physics experiments, and identifying astronomical objects known as gravitational lenses in maps of the night sky. The rudimentary algorithm that every machine-learning enthusiast starts with is linear regression. In statistics, linear regression is a linear approach to modelling the relationship between a scalar response (or dependent variable) and one or more explanatory variables (or independent variables). Linear regression analysis (least squares) is used in a physics lab to prepare computer-aided reports and to fit data. In this article, the method is applied to the experiment 'DETERMINATION OF DIELECTRIC CONSTANT OF NON-CONDUCTING LIQUIDS'. The entire computation is carried out in the Python 3.6 programming language.
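A hedged sketch of the least-squares step in Python/NumPy (the data here are invented, not the dielectric-constant measurements from the article): fitting a straight line y = m*x + c by ordinary least squares and reporting the goodness of fit.

import numpy as np

# Hypothetical lab data: x-values and a noisy linear response.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Least-squares fit of a straight line y = m*x + c.
m, c = np.polyfit(x, y, deg=1)

# Coefficient of determination R^2 for the fit.
residuals = y - (m * x + c)
r2 = 1 - residuals @ residuals / ((y - y.mean()) @ (y - y.mean()))
print(m, c, r2)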
This document discusses derivatives, including their definition, history, real-life applications, and use in various sciences. Derivatives are defined as the instantaneous rate of change of one variable with respect to another and geometrically as the slope of a curve at a point. Their modern conception is credited to Isaac Newton and Gottfried Leibniz in the 17th century. Derivatives have applications in business for estimating profits and losses, and in automobiles to calculate speed and distance traveled from odometer and speedometer readings. They are also used in physics to define velocity and acceleration and in mathematics to study extreme values, mean value theorems, and curve sketching.
MIXTURES OF TRAINED REGRESSION CURVES MODELS FOR HANDWRITTEN ARABIC CHARACTER... - gerogepatton
In this paper, we demonstrate how regression curves can be used to recognize 2D non-rigid handwritten shapes. Each shape is represented by a set of non-overlapping, uniformly distributed landmarks. The underlying models use second-order polynomials to model the shapes within a training set. To estimate the regression models, we extract the coefficients that describe the variations for a set of shape classes; a least-squares method is used to estimate these modes. We then train these coefficients using the Expectation-Maximization algorithm. Recognition is carried out by finding the least-error landmark displacement with respect to the model curves. Handwritten isolated Arabic characters are used to evaluate our approach.
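A loose sketch of the modelling and scoring steps (assumptions: landmarks are (x, y) pairs, each shape's y-coordinates are regressed on the x-coordinates with a second-order polynomial, and the recognition score is the squared landmark displacement; this illustrates the idea, not the paper's exact mixture formulation):

import numpy as np

# One shape as 10 uniformly distributed landmarks (hypothetical data).
landmarks = np.array([[0.0, 0.1], [0.1, 0.4], [0.2, 0.8], [0.3, 1.1],
                      [0.4, 1.2], [0.5, 1.2], [0.6, 1.0], [0.7, 0.7],
                      [0.8, 0.4], [0.9, 0.1]])

# Least-squares fit of a 2nd-order polynomial y = a*x^2 + b*x + c.
coeffs = np.polyfit(landmarks[:, 0], landmarks[:, 1], deg=2)

# Recognition-style score: displacement of landmarks from the model curve.
predicted = np.polyval(coeffs, landmarks[:, 0])
error = np.sum((landmarks[:, 1] - predicted) ** 2)
print(coeffs, error)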
A note on estimation of population mean in sample survey using auxiliary info... - Alexander Decker
1. The document proposes a class of estimators for estimating the population mean in two-phase sampling using auxiliary information.
2. Some common estimators like the ratio, product, and regression estimators are special cases within the proposed class. Expressions for bias and mean squared error of the estimators are obtained up to the first order of approximation.
3. Asymptotically optimum estimators are identified that have minimum mean squared error. The proposed class of estimators is found to perform better than usual ratio and other estimators for population mean estimation.
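A small sketch of the classical special cases mentioned in point 2, with made-up sample values (the proposed class itself is more general): the ratio and regression estimators of the population mean, using an auxiliary variable x whose population mean is known.

import numpy as np

y = np.array([12.0, 15.0, 11.0, 14.0, 13.0])   # study variable (sample)
x = np.array([30.0, 36.0, 28.0, 33.0, 31.0])   # auxiliary variable (sample)
X_bar = 32.0                                   # known population mean of x

# Ratio estimator: y_bar * (X_bar / x_bar).
ratio_est = y.mean() * X_bar / x.mean()

# Regression estimator: y_bar + b * (X_bar - x_bar), b the sample slope.
b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
regression_est = y.mean() + b * (X_bar - x.mean())
print(ratio_est, regression_est)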
This presentation provides an introduction to differential calculus. It defines calculus and differentiation, and classifies calculus into differential calculus and integral calculus. Differential calculus deals with finding rates of change of functions with respect to variables using derivatives, while integral calculus involves determining lengths, areas, volumes, and solving differential equations using integrals. The presentation explains key calculus concepts like derivatives, differentiation, and differential curves. It concludes by presenting some common formulas for differentiation.
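For instance, the common differentiation formulas such a presentation closes with can be checked symbolically (a side illustration in Python/SymPy, not part of the presentation itself):

import sympy as sp

x = sp.symbols('x')
# A few common differentiation formulas, verified symbolically.
print(sp.diff(x**3, x))         # 3*x**2  (power rule)
print(sp.diff(sp.sin(x), x))    # cos(x)
print(sp.diff(sp.exp(2*x), x))  # 2*exp(2*x)  (chain rule)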
The Fundamental Theorem of Calculus states that the integral of a function f(x) from a to b is equal to F(b)-F(a), where F is an antiderivative of f. The theorem took hundreds of years to develop through the works of mathematicians like Aristotle, Archimedes, Oresme, Stevin, Galileo, Leibniz, Newton, and Cauchy. While important for understanding functions and rates of change, students often struggle with the theorem due to unwieldy textbooks and unengaging instruction. Teachers need to use more graphical and numerical examples to help students gain a deeper understanding of the fundamental concepts.
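The theorem is easy to illustrate numerically, which is the kind of example the document calls for (a sketch; the integrand is an arbitrary choice): the numerically computed integral of f(x) = 3x^2 on [a, b] matches F(b) - F(a) with F(x) = x^3.

from scipy.integrate import quad

f = lambda x: 3 * x**2        # integrand
F = lambda x: x**3            # an antiderivative of f
a, b = 1.0, 4.0

numeric, _ = quad(f, a, b)    # numerical integration
print(numeric, F(b) - F(a))   # both are 63.0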
Determination of Area of Obscuration between Two Circles by Method of Integra... - ijtsrd
Obscuration is the phenomenon of a certain percentage of the area of one object being covered by another. It can be a governing variable in radiation heat transfer, fluid dynamics, and so on. Although modern-day analytical tools allow determination of the area of obscuration, the non-availability of handy formulae for theoretical calculations has led to this study. In this study, the method of integration shall be used to determine the area of obscuration for various cases. Two circles of different radii shall be considered, various positions shall be examined, and based on the combinations obtained, expressions for the area of obscuration shall be derived. This study aims at enumerating and exploring various cases so as to pave the way towards a further study of general formulae for the required expressions. The theoretically obtained area can thus be used in various calculations as and when found necessary. Gourav Vivek Kulkarni, "Determination of Area of Obscuration between Two Circles by Method of Integration", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5, Issue-1, December 2020, URL: https://www.ijtsrd.com/papers/ijtsrd35863.pdf Paper URL: https://www.ijtsrd.com/mathemetics/calculus/35863/determination-of-area-of-obscuration-between-two-circles-by-method-of-integration/gourav-vivek-kulkarni
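As a point of comparison (not the paper's derivation), the standard closed form for the overlap ("lens") area of two circles, which the integration approach effectively reproduces for the intersecting case, is short to code:

import math

def lens_area(r, R, d):
    # Overlap area of two circles with radii r, R and center distance d.
    if d >= r + R:
        return 0.0                       # disjoint: no obscuration
    if d <= abs(r - R):
        return math.pi * min(r, R) ** 2  # one circle inside the other
    a = r * r * math.acos((d * d + r * r - R * R) / (2 * d * r))
    b = R * R * math.acos((d * d + R * R - r * r) / (2 * d * R))
    c = 0.5 * math.sqrt((-d + r + R) * (d + r - R) * (d - r + R) * (d + r + R))
    return a + b - c

print(lens_area(2.0, 3.0, 4.0))  # partially overlapping circles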
Principal Component Analysis and Clustering - Usha Vijay
Identifying the borrower segments from the given bank data set, which has 27,000 rows and 77 variables, using PROC PRINCOMP. With this many variables, it is important to reduce the data set to a smaller set of variables to derive a feasible conclusion. Because of multicollinearity, two or more variables can share the same plane in these dimensions. Each row of the data can be envisioned as a point in a 77-dimensional space, and when the data are projected onto an orthonormal basis, certain characteristics of the data are expected to cluster together as principal components. To identify these principal components, PROC PRINCOMP is executed with all the variables except the constant variables (recoveries and collection fees), and a plot of the eigenvalues of all the principal components is derived.
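The same eigenvalue computation sketched outside SAS (a hypothetical Python equivalent of the PROC PRINCOMP step, run on random stand-in data rather than the 27,000-row bank data set):

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 77))          # stand-in for the bank data

# Standardize, form the covariance matrix, eigendecompose.
z = (data - data.mean(axis=0)) / data.std(axis=0)
cov = np.cov(z, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)
eigenvalues = eigenvalues[::-1]             # descending, as in a scree plot
print(eigenvalues[:10])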
The document discusses curve fitting and interpolation techniques. It introduces least squares fitting as a method to determine the best coefficients for a linear model given a set of data. It describes how to perform linear and polynomial curve fitting in MATLAB using the polyfit and polyval functions. It also covers one-dimensional interpolation methods like linear, spline, and piecewise cubic interpolation in MATLAB using the interp1 function. An example demonstrates using these techniques to calculate interpolated y-values between given data points.
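A compact sketch of that workflow (in Python/NumPy/SciPy, mirroring MATLAB's polyfit, polyval, and interp1; the data points are invented):

import numpy as np
from scipy.interpolate import interp1d

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.7, 5.8, 6.6, 7.5])

# Polynomial curve fitting (degree 2), then evaluation at a new point.
p = np.polyfit(x, y, deg=2)
y_fit = np.polyval(p, 2.5)

# One-dimensional interpolation: linear and cubic spline.
y_lin = interp1d(x, y, kind='linear')(2.5)
y_spl = interp1d(x, y, kind='cubic')(2.5)
print(y_fit, y_lin, y_spl)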
Naïve Bayes Machine Learning Classification with R Programming: A case study ... - SubmissionResearchpa
This analytical review paper explains the Naïve Bayes machine-learning technique for simple probabilistic classification based on Bayes' theorem, with the assumption of independence between features, using R programming. There is a large gap concerning which algorithm is suitable for data analysis when a large categorical variable is to be predicted in research data. For the Naïve Bayes classification, the model is trained on the training data set to make predictions on the test data set. The strength of the technique is that it takes in new information and tries to make a better forecast by considering the new evidence, which suits input variables that are largely categorical in nature; this is similar to how the human mind selects a proper judgement from various alternatives. Here the researcher takes a binary.csv data set of 400 observations with 4 attributes from educational data. Admit is the dependent variable; gre score, gpa, and rank of the previous grade ultimately determine whether a student will be admitted to the next program. Initially, the gre and gpa variables show a 0.36 significance value in their association with the rank categorical variable. The box plot and density plot demonstrate the overlap between the admitted and not-admitted data. The Naïve Bayes classification model classifies the data with 0.68 probability for not admitted and 0.31 for admitted. The confusion matrix and the predictions were calculated with 0.68 accuracy at a 95 percent confidence interval. Similarly, the training accuracy increases from 29 percent to 32 percent when the Naïve Bayes method is run with usekernel set to TRUE, which ultimately decreases misclassification errors in the binary data set. Yagyanath Rimal. (2019). Naïve Bayes Machine Learning Classification with R Programming: A case study of binary data sets. International Journal on Orange Technologies, 1(2), 27-34. Retrieved from https://journals.researchparks.org/index.php/IJOT/article/view/358 Pdf Url: https://journals.researchparks.org/index.php/IJOT/article/view/358/347 Paper Url: https://journals.researchparks.org/index.php/IJOT/article/view/358
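A minimal analogue of this train/test workflow (sketched with Python/scikit-learn instead of the article's R code, on synthetic stand-in data rather than the binary.csv file; the variable names gre, gpa, and rank follow the abstract):

import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(0)
# Stand-in for gre, gpa, rank; admit depends noisily on them.
X = np.column_stack([rng.normal(580, 100, 400),
                     rng.normal(3.4, 0.4, 400),
                     rng.integers(1, 5, 400)])
admit = (0.004 * X[:, 0] + X[:, 1] - 0.5 * X[:, 2]
         + rng.normal(0, 1, 400) > 4.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, admit, test_size=0.3, random_state=0)
model = GaussianNB().fit(X_tr, y_tr)   # train on the training split
pred = model.predict(X_te)             # predict on the test split
print(confusion_matrix(y_te, pred), accuracy_score(y_te, pred))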
Optimization of sample configurations for spatial trend estimation - Alessandro Samuel-Rosa
Developed with Dick J Brus (Alterra, Wageningen University and Research Centre, the Netherlands), Gustavo M Vasques (Embrapa Soils, Brazil), Lúcia Helena Cunha dos Anjos (Universidade Federal Rural do Rio de Janeiro, Brazil). Presented at Pedometrics 2015, 14-18 September 2015, Córdoba, Spain.
Fault diagnosis using genetic algorithms and principal curves - eSAT Journals
Abstract: Several applications of nonlinear principal component analysis (NPCA) have appeared recently in process monitoring and fault diagnosis. In this paper a new approach is proposed for fault detection based on principal curves and genetic algorithms. The principal curve is a generalization of the linear principal component (PCA), introduced by Hastie as a parametric curve that passes satisfactorily through the middle of the data. Existing principal-curve algorithms employ the first component of the data as an initial estimate of the principal curve; however, this dependence on the initial line leads to a lack of flexibility, and the final curve is satisfactory only for specific problems. In this paper we extend this work in two ways. First, we propose a new method based on genetic algorithms to find the principal curve, in which lines are fitted and connected to form polygonal lines (PL). Second, a potential application of principal curves is discussed. An example is used to illustrate fault diagnosis of a nonlinear process using the proposed approach. Index Terms: Principal curve, Genetic Algorithm, Nonlinear principal component analysis, Fault detection.
This document summarizes a paper that analyzes compressive sampling (CS) for compressing and reconstructing electrocardiogram (ECG) signals using l1 minimization algorithms. It proposes remodeling the linear program problem into a second order cone program to improve performance metrics like percent root-mean-squared difference, compression ratio, and signal-to-noise ratio when reconstructing ECG signals from the PhysioNet database. The paper provides an overview of CS theory and l1 minimization algorithms, describes the proposed approach of using quadratic constraints, and defines performance metrics for analyzing reconstructed ECG signals.
The document discusses different methods for finding the roots or zeros of equations, where the root is the value of x that makes the equation f(x) equal to 0. It describes graphical methods, incremental search methods like bisection which divides the interval in half each iteration, and open methods like Newton-Raphson that use calculus to home in on the root. It also addresses challenges in finding multiple roots where the function is tangent to the x-axis at the root.
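Both families are easy to sketch in Python (the test equation f(x) = x^2 - 2, whose positive root is sqrt(2), is an arbitrary choice):

def bisection(f, lo, hi, tol=1e-10):
    # Bracketing method: halve the interval containing the sign change.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def newton(f, df, x0, tol=1e-10, max_iter=50):
    # Open method: follow the tangent line toward the root.
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: x * x - 2
print(bisection(f, 0.0, 2.0), newton(f, lambda x: 2 * x, 1.0))  # both ~1.41421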
Reconstruction of Time Series using Optimal Ordering of ICA Components - rahulmonikasharma
This document summarizes research investigating the application of independent component analysis (ICA) and optimal ordering of independent components to reconstruct time series generated by mixed independent sources. A modified fast neural learning ICA algorithm is used to obtain independent components from observed time series. Different error measures and algorithms for determining the optimal ordering of independent components for reconstruction are compared, including exhaustive search, using the L-infinity norm of components, and excluding the least contributing component first. Experimental results on artificial and currency exchange rate time series support using the Euclidean error measure and minimizing the error profile area to obtain the optimal ordering. Reconstructions with only the first few dominant independent components are often acceptable.
KNN and ARL Based Imputation to Estimate Missing Values - ijeei-iaes
Missing data are the absence of data items for a subject; they hide information that may be important. In practice, missing data have been a major factor affecting data quality, so missing-value imputation is needed. Methods such as hierarchical clustering and K-means clustering are not robust to missing data and may lose effectiveness even with a few missing values. In this paper, KNN-based and ARL-based imputation are introduced to impute missing values, and the accuracy of both algorithms is measured using the normalized root mean square error. The results show that ARL is the more accurate and robust method for missing-value estimation.
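A sketch of the evaluation loop for the KNN half only (Python/scikit-learn on synthetic data; the ARL-based method is not reproduced here): knock out entries from a complete matrix, impute, and score with the normalized root mean square error.

import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)
complete = rng.normal(size=(200, 5))

# Knock out 10% of the entries to create missing values.
data = complete.copy()
mask = rng.random(data.shape) < 0.10
data[mask] = np.nan

# KNN-based imputation.
imputed = KNNImputer(n_neighbors=5).fit_transform(data)

# Normalized root mean square error over the imputed entries.
nrmse = np.sqrt(np.mean((imputed[mask] - complete[mask]) ** 2)) / complete[mask].std()
print(nrmse)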
Applied Artificial Intelligence Unit 2 Semester 3 MSc IT Part 2 Mumbai Univer... - Madhav Mishra
This document covers probability theory and fuzzy sets and fuzzy logic, which are topics for an applied artificial intelligence unit. It discusses key concepts for probability theory including joint probability, conditional probability, and Bayes' theorem. It also covers fuzzy sets and fuzzy logic, including fuzzy set operations, types of membership functions, linguistic variables, and fuzzy propositions and inference rules. Examples are provided throughout to illustrate probability and fuzzy set concepts. The document is presented as a slideshow with explanatory text and diagrams on each slide.
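As a worked instance of the Bayes' theorem concept the unit covers (the medical-test numbers are an assumed textbook-style example, not taken from the slides):

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B), with
# P(B) expanded by total probability over A and not-A.
p_disease = 0.01            # prior P(A): hypothetical base rate
p_pos_given_disease = 0.95  # likelihood P(B|A): test sensitivity
p_pos_given_healthy = 0.05  # false-positive rate P(B|not A)

p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(p_disease_given_pos)  # about 0.16, despite the accurate test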
Efficiency of ratio and regression estimators using double sampling - Alexander Decker
This document discusses different sampling methods for estimating population parameters, including ratio estimation, regression estimation, and simple random sampling without replacement. It compares the efficiency of using double sampling for ratio estimation versus double sampling for regression estimation versus simple random sampling without replacement. The key findings are that double sampling for regression estimation is more efficient than double sampling for ratio estimation or simple random sampling without replacement if the regression line does not pass through the origin. Relative efficiency and coefficient of variation are used to empirically compare the different estimators.
Principal Component Analysis (PCA) is a technique used to reduce the dimensionality of data by transforming it to a new coordinate system. It works by finding the principal components - linear combinations of variables with the highest variance - and using those to project the data to a lower dimensional space. PCA is useful for visualizing high-dimensional data, reducing dimensions without much loss of information, and finding patterns. It involves calculating the covariance matrix and solving the eigenvalue problem to determine the principal components.
Interpolation is a method to estimate values between known data points. It can be used in computer science for tasks like image reconstruction, CAD modeling, and graphics. Secret sharing schemes also use interpolation to distribute secret information among participants in a way that some minimum number of participants are needed to reconstruct the secret. Lagrange interpolation specifically involves defining a polynomial that passes through known data points, which can then be evaluated at other points to interpolate unknown values. While simple, it provides no error checking and mistakes could occur in calculations.
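A short sketch of Lagrange interpolation in Python (the data points are arbitrary; in a Shamir-style secret-sharing scheme one would evaluate the same polynomial at x = 0 to recover the secret):

def lagrange(points, x):
    # Evaluate the Lagrange interpolating polynomial through `points` at x.
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# The polynomial through (1,1), (2,4), (3,9) is x^2; interpolate at x = 2.5.
print(lagrange([(1, 1), (2, 4), (3, 9)], 2.5))  # 6.25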
Solving Fuzzy Maximal Flow Problem Using Octagonal Fuzzy Number - IJERA Editor
In this paper a general fuzzy maximal flow problem is discussed. A crisp maximal flow problem can be solved by two methods: linear programming modelling and the maximal flow algorithm. Here I have tried to fuzzify the maximal flow algorithm using the octagonal fuzzy numbers introduced by S.U. Malini and Felbin C. Kennedy [26]. By ranking the octagonal fuzzy numbers it is possible to compare them, and using this we convert the fuzzy-valued maximal flow algorithm to a crisp-valued algorithm. It is shown that a better solution is obtained when the problem is solved using octagonal fuzzy numbers than when it is solved using trapezoidal fuzzy numbers. To illustrate this, a numerical example is solved and the obtained result is compared with existing results. If there is no uncertainty about the flow between source and sink, the proposed algorithm gives the same result as in crisp maximal flow problems.
This document discusses numerical methods for finding roots, or solutions, of equations. It begins by introducing graphical and closed interval methods, such as bisection, which use initial values that bracket the root. It then covers open methods that use a single initial value, such as fixed point iteration and Newton's method. The document also addresses methods for finding multiple roots and polynomial roots, including Muller's method, Bairstow's method, and synthetic division.
Parameter Estimation for the Exponential distribution model Using Least-Squar... - IJMERJOURNAL
Abstract: We find parameter estimates of the Exponential distribution models using the least-squares estimation method. For the case when partial derivatives were not available, the Nelder-Mead and Hooke-Jeeves optimization methods were used; for the case when first partial derivatives are available, quasi-Newton methods (the Davidon-Fletcher-Powell (DFP) and Broyden-Fletcher-Goldfarb-Shanno (BFGS) optimization methods) were applied. The medical data sets of 21 leukemia cancer patients with a time span of 35 weeks ([3],[6]) were used.
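A sketch of the two optimizer families in Python/SciPy (the objective below is a generic least-squares criterion for an exponential survival model on invented data, not the paper's leukemia data set):

import numpy as np
from scipy.optimize import minimize

t = np.array([1.0, 5.0, 10.0, 20.0, 35.0])    # hypothetical weeks
s = np.array([0.90, 0.70, 0.45, 0.20, 0.05])  # hypothetical survival fractions

def sse(theta):
    # Least-squares criterion for the exponential model S(t) = exp(-lam * t).
    lam = theta[0]
    return np.sum((s - np.exp(-lam * t)) ** 2)

# Derivative-free (Nelder-Mead) vs gradient-based (BFGS) minimization.
res_nm = minimize(sse, x0=[0.1], method='Nelder-Mead')
res_bfgs = minimize(sse, x0=[0.1], method='BFGS')
print(res_nm.x, res_bfgs.x)  # the two estimates should agree closely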
The document analyzes water vapor weighting functions using radiosonde data from locations between 58°N and 45°S latitude for July and August. It calculates water vapor absorption coefficients and weighting functions at different frequencies and heights. The results show that weighting functions bend more sharply above 2-3 km, indicating better vertical resolution for retrieving atmospheric parameters above this height. Among the frequencies analyzed, 23.834 GHz provided the best vertical resolution above 2-3 km based on the bending of its weighting function curves.
International Journal of Engineering Research and Applications (IJERA) aims to cover the latest outstanding developments in all fields of engineering, technology and science.
International Journal of Engineering Research and Applications (IJERA) is a team of researchers, not a publication service or private publisher running journals for monetary benefit; we are an association of scientists and academics who focus only on supporting authors who want to publish their work. The articles published in our journal can be accessed online, and all articles are archived for real-time access.
Our journal system primarily aims to bring out the research talent and the work done by scientists, academics, engineers, practitioners, scholars, and postgraduate students of engineering and science. The journal covers scientific research in a broad sense rather than a niche area, enabling researchers from various verticals to publish their papers. It also aims to provide a platform for researchers to publish in a shorter time, enabling them to continue their work. All published articles are freely available to scientific researchers in government agencies, to educators, and to the general public. We are making serious efforts to promote our journal across the globe in various ways, and we are confident it will act as a scientific platform for all researchers to publish their work online.
Computer science assignment by Alexander Carrillo Gomez - Carryllito Gomez
This document presents a computer science assignment completed by Alexander Carrillo for his teacher Clemente Silva in 2013. The assignment includes questions about the leading processor brands, preventive and corrective equipment maintenance, and brief explanations of hard drives, RAM, and CD-ROM/CD-RW.
The document describes some of the world's most popular tourist destinations, including Lake Tota in Colombia, the Eiffel Tower in Paris, the Taj Mahal in India, and Ecuador's Galápagos Islands. These sites attract millions of visitors each year because of their natural beauty and their cultural and historical importance.
Plants are living beings capable of producing organic matter from mineral elements and releasing oxygen, which made possible the appearance and settlement of animals and humans. More than 300,000 plant species are known. Vascular plants are the so-called higher plants, which show true tissue differentiation into root, stem, and leaves. Medicinal plants are resources whose parts or extracts are used as drugs to treat ailments. Timber plants form the trun
This document summarizes a 30-minute interview with Portuguese coach Leonardo Jardim after his unexpected departure from the Greek club Olympiacos. The interview covers Jardim's exit from Olympiacos, his earlier career, and his views on Portuguese football.
Essentials of enterprise help desk software are defined, which helps in selecting the right help desk software. Using a help desk that meets all your needs will help you provide good customer service.
This document provides instructions on how to conjugate verbs in the past tense and past participle in English. It explains that regular verbs end in "ed", while irregular verbs have unique forms. It groups irregular verbs into five categories according to how they change in the past tense and past participle, and uses the verb "to call" as an example of fully conjugating a regular verb.
The document describes various aspects of Peruvian culture, including its diverse cuisine, geographic divisions, widespread handicrafts, annual popular festivals, regional music and dances such as the Marinera, and rituals such as the Ayahuasca ceremony.
The educator will give a talk to lower-intermediate-level students about the importance of milk. The talk will emphasize that milk is essential for growth and for getting stronger, and will explain where milk comes from and what it is for. The talk will last 20 minutes.
Stress-Strength Reliability of type II compound Laplace distribution - IRJET Journal
The document discusses estimating the reliability (R) of a stress-strength model where the strength (Y) and stress (X) variables follow a type II compound Laplace distribution. R is defined as the probability that stress is less than strength (Pr(X<Y)).
It first provides background on stress-strength models and reliability (R) in engineering. It then defines the type II compound Laplace distribution and derives an expression for R when X and Y follow this distribution. Maximum likelihood estimation is used to estimate the three parameters of the distribution. Simulation studies validate the estimation algorithm. Applications of stress-strength models are discussed in fields like engineering, medicine, and genetics.
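The type II compound Laplace distribution is not available in SciPy, so as a stand-in illustration of the R = Pr(X < Y) computation itself, here is a Monte Carlo estimate with ordinary Laplace stress and strength (the parameter values are arbitrary assumptions):

import numpy as np
from scipy.stats import laplace

rng = np.random.default_rng(0)
n = 100_000
stress = laplace.rvs(loc=0.0, scale=1.0, size=n, random_state=rng)    # X
strength = laplace.rvs(loc=2.0, scale=1.0, size=n, random_state=rng)  # Y

# Reliability R = Pr(X < Y), estimated by simple Monte Carlo.
R_hat = np.mean(stress < strength)
print(R_hat)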
International Journal of Computational Engineering Research (IJCER) - ijceronline
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
Estimation of Reliability in Multicomponent Stress-Strength Model Following B...IJERA Editor
The stress-strength model describes the reliability of a multicomponent system in which a device is a combination of 𝒦 mutually independent components with strengths 𝑋1, 𝑋2, …, 𝑋𝒦, each experiencing a common stress 𝑌. The system is regarded as alive if at least 𝒮 out of 𝒦 (𝒮 < 𝒦) strengths exceed the stress. The reliability of such a system is obtained when the stress and strengths are assumed to have Burr XII distributions with a common shape parameter (𝑐). The methodology adopted here is to estimate the parameters by the maximum likelihood method. Reliability estimators are obtained for samples drawn from the strength and stress distributions under different types of ranked set sampling. The efficiencies of reliability estimators based on ranked set sampling, median ranked set sampling, extreme ranked set sampling and percentile ranked set sampling are calculated relative to reliability estimators based on the simple random sampling procedure.
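To make the s-out-of-𝒦 criterion concrete, here is a hedged Monte Carlo sketch of this system reliability under Burr XII stress and strengths; the shape values, 𝒦 = 5 and 𝒮 = 3 are illustrative assumptions, and plain random sampling is used rather than the ranked set designs studied in the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical parameters: common shape c, distinct second shapes (assumptions).
c = 2.0
strength_dist = stats.burr12(c, 4.0)  # X_1 .. X_K
stress_dist = stats.burr12(c, 2.5)    # common stress Y

K, s, n_trials = 5, 3, 200_000
strengths = strength_dist.rvs((n_trials, K), random_state=rng)
stress = stress_dist.rvs((n_trials, 1), random_state=rng)

# The system survives when at least s of the K strengths exceed the common stress.
alive = (strengths > stress).sum(axis=1) >= s
print(f"Monte Carlo R(s={s}, K={K}) ~ {alive.mean():.4f}")
```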
Multiple Linear Regression Model with Two Parameter Doubly Truncated New Symm...theijes
This document presents a multiple linear regression model with errors that follow a two-parameter doubly truncated new symmetric distribution. The model extends traditional linear regression by allowing for non-Gaussian distributed errors that are finite and bounded. Properties of the doubly truncated new symmetric distribution are derived, including its probability density function and characteristic function. Maximum likelihood and ordinary least squares methods are used to estimate the model parameters. A simulation study compares the proposed model to existing models that assume Gaussian or new symmetric distributed errors.
The effectiveness of various analytical formulas for estimating R² shrinkage in multiple regression analysis was investigated. Two categories of formulas were identified: estimators of the squared population multiple correlation coefficient (ρ²) and estimators of the squared population cross-validity coefficient (ρ²c). The authors compared the effectiveness of the analytical formulas for determining R² shrinkage with the squared population multiple correlation coefficient and the number of predictors; after finding all combinations among the variables, the maximum correlation was selected to compute both categories of formulas. The results indicated that, among the 6 analytical formulas designed to estimate the population ρ², the Olkin and Pratt formula-1 for six variables, followed by the Burket formula and Lord formula-2 among the 9 analytical formulas, were found to be the most stable and satisfactory.
This document describes a new method called Likelihood-based Sufficient Dimension Reduction (LAD) for estimating the central subspace. LAD obtains the maximum likelihood estimator of the central subspace under the assumption of conditional normality of the predictors given the response. Analytically and in simulations, LAD is shown to often perform much better than existing methods like Sliced Inverse Regression, Sliced Average Variance Estimation, and Directional Regression. LAD also inherits useful properties from maximum likelihood theory, such as the ability to estimate the dimension of the central subspace and test conditional independence hypotheses.
A linear prediction in logarithmic space (hence non-linear in the original scale) is proposed; it is essentially a geometric series, under which the number of coal-miner deaths decreases year by year in a fixed proportion.
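A minimal sketch of such a log-space linear prediction, with made-up death counts purely for illustration:

```python
import numpy as np

# Hypothetical yearly death counts (made-up numbers for illustration only).
years = np.arange(2000, 2010)
deaths = np.array([980, 890, 810, 735, 670, 605, 550, 500, 455, 410])

# Linear fit in log space: log(d_t) ~ a + b*t, i.e. d_t ~ exp(a) * r**t.
b, a = np.polyfit(years, np.log(deaths), 1)
r = np.exp(b)  # common ratio of the implied geometric series
print(f"estimated yearly ratio r ~ {r:.3f}")

# Predict the next year by extending the geometric trend.
pred_2010 = np.exp(a + b * 2010)
print(f"predicted deaths in 2010 ~ {pred_2010:.0f}")
```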
This document summarizes a paper that introduces a mathematical approach to quantify errors resulting from reduced order modeling (ROM) techniques. ROM aims to reduce the dimensionality of input and output data for computationally intensive simulations like uncertainty quantification. The paper presents a method to calculate probabilistic error bounds for ROM that account for discarded model components. Numerical experiments on a pin cell reactor physics model demonstrate the ability to determine error bounds and validate them against actual errors with high probability. The error bounds approach could enable ROM techniques to self-adapt the level of reduction needed to ensure errors remain below thresholds for reliability.
Probabilistic Error Bounds for Reduced Order Modeling M&C2015Mohammad
This paper introduces a mathematical approach to quantify errors resulting from reduced order modeling (ROM) techniques. ROM works by discarding model components deemed to have negligible impact, but this introduces reduction errors. The paper derives an expression to calculate probabilistic error bounds for the discarded components. Numerical experiments on a pin cell model demonstrate the approach, showing the error bounds capture the actual errors with high probability, even when the ROM is applied under different physics conditions. The error bounding technique allows ROM algorithms to self-adapt and ensure reduction errors remain below user-defined tolerances.
Ratio and Product Type Estimators Using Stratified Ranked Set Samplinginventionjournals
In this paper, we propose a class of estimators for the population mean of a variable of interest using information on an auxiliary variable in stratified ranked set sampling. The bias and mean squared error of the proposed class of estimators are obtained to a first degree of approximation. It is shown, theoretically, that the suggested estimators are more efficient than estimators based on stratified simple random sampling.
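For orientation, the classical ratio estimator under plain simple random sampling (not the stratified ranked set version proposed in the paper) can be sketched as follows; the population model and sample sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical population where y is roughly proportional to the auxiliary x.
N = 10_000
x_pop = rng.gamma(4.0, 2.0, N)
y_pop = 3.0 * x_pop + rng.normal(0, 2.0, N)
X_bar = x_pop.mean()  # population mean of x, assumed known

n = 200
idx = rng.choice(N, n, replace=False)
x, y = x_pop[idx], y_pop[idx]

y_srs = y.mean()                        # plain sample mean
y_ratio = y.mean() * X_bar / x.mean()   # classical ratio estimator

print(f"true Y-bar = {y_pop.mean():.2f}, SRS = {y_srs:.2f}, ratio = {y_ratio:.2f}")
```

The ratio estimator gains precision exactly when y and x are strongly positively correlated, which is the same mechanism the stratified ranked set versions exploit.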
Estimating Reconstruction Error due to Jitter of Gaussian Markov ProcessesMudassir Javed
This paper presents the estimation of reconstruction error due to jitter of Gaussian Markov processes. Two samples are considered for the analysis in two different situations. In the first situation, one sample is free of jitter while the other is affected by jitter; in the second, both samples are affected by jitter. The probability density functions of the jitter are given by the uniform distribution and the Erlang distribution. Statistical averaging is applied to the conditional expectation of the jitter random variable, from which the conditional variance is obtained; this is defined as the reconstruction error function, and from it the reconstruction error of a Gaussian Markov process is determined.
This document discusses modeling claim amounts in insurance using probability distributions and simulations. It begins with an introduction to fitting distributions to insurance claims data. The key steps are outlined as selecting an appropriate loss distribution, estimating its parameters using maximum likelihood estimation, and testing the fit using goodness of fit tests. The Pareto distribution is discussed in more detail as it is commonly used to model insurance claim amounts. The document concludes by describing how to simulate claim amounts above a deductible using the Pareto distribution and calculate reinsurance premiums.
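A small sketch of the simulation step under an assumed Pareto severity model; the shape, scale and deductible values are illustrative, and the pure premium is taken as the average excess payment per policy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Hypothetical Pareto claim severities; alpha and x_m are assumptions.
alpha, x_m = 2.5, 1_000.0  # shape and scale (minimum claim size)
claims = stats.pareto(alpha, scale=x_m).rvs(50_000, random_state=rng)

deductible = 5_000.0
excess = claims[claims > deductible] - deductible  # reinsurer pays the excess

# Pure excess-of-loss reinsurance premium per policy = expected excess payment.
premium = excess.sum() / len(claims)
print(f"share of claims above deductible: {np.mean(claims > deductible):.3%}")
print(f"pure excess-of-loss premium ~ {premium:.2f}")
```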
Extreme bound analysis based on correlation coefficient for optimal regressio...Loc Nguyen
Regression analysis is an important tool in statistical analysis, in which there is a need to discover the essential independent variables among many others, especially when there is a huge number of random variables. Extreme bound analysis is a powerful approach to extracting such important variables, called robust regressors. In this research, a Regressive Expectation Maximization with RObust regressors (REMRO) algorithm is proposed as an alternative to other probabilistic methods for analyzing robust variables. Unlike those methods, REMRO searches for the robust regressors that form the optimal regression model and sorts them in descending order of fitness values determined by two proposed concepts, local correlation and global correlation. Local correlation represents sufficient explanatory power for possible regression models, and global correlation reflects the independence level and stand-alone capacity of regressors. Moreover, REMRO can resist incomplete data because it applies the Regressive Expectation Maximization (REM) algorithm to fill missing values with estimates based on the expectation maximization (EM) idea. Experimental results show that REMRO models numeric regressors more accurately than traditional probabilistic methods such as the Sala-i-Martin method, but REMRO cannot yet be applied to non-numeric regression models.
Simulating Multivariate Random Normal Data using Statistical Computing Platfo...ijtsrd
This document describes how to simulate multivariate normal random data using the statistical computing platform R. It discusses two main decomposition methods for generating such data - Eigen decomposition and Cholesky decomposition. These methods decompose the variance-covariance matrix in different ways to simulate random normal values that match the desired mean and variance-covariance structure. The document provides code examples in R to implement both methods and compares the results. It finds that both methods accurately reproduce the target multivariate normal distribution.
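The Cholesky variant described above is straightforward to mirror in Python with NumPy (the document's own examples are in R); the mean vector and covariance matrix here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([1.0, 2.0, 3.0])
sigma = np.array([[4.0, 2.0, 1.0],
                  [2.0, 3.0, 1.5],
                  [1.0, 1.5, 2.0]])  # target variance-covariance matrix

# Cholesky decomposition: sigma = L @ L.T with L lower triangular.
L = np.linalg.cholesky(sigma)

# Transform iid standard normals: x = mu + L z has the target distribution.
n = 100_000
z = rng.standard_normal((n, 3))
x = mu + z @ L.T

print(np.round(x.mean(axis=0), 2))           # ~ mu
print(np.round(np.cov(x, rowvar=False), 2))  # ~ sigma
```

The Eigen-decomposition method differs only in the square-root factor used: sigma = V diag(w) V' gives the transform V diag(sqrt(w)) instead of L.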
This study evaluated the performance of bootstrap confidence intervals for estimating slope coefficients in Model II regression with three or more variables. Simulation studies were conducted for different correlation structures between variables, sampling from both normal and lognormal distributions. The results showed that bootstrap intervals provided less than the nominal 95% coverage. Scenarios with strong relationships between variables produced better coverage, while scenarios with weaker relationships and bias produced poorer coverage, even with larger sample sizes. Future work could explore additional scenarios and alternative interval methods to improve accuracy of confidence intervals in Model II regression.
1. Regression analysis is a statistical technique used to model relationships between variables and make predictions. It can be used to describe relationships, estimate coefficients, make predictions, and control systems.
2. Linear regression models describe straight-line relationships between variables, while non-linear models describe curved relationships. The goodness of fit of a model can be evaluated using the coefficient of determination.
3. The least squares method is used to fit regression lines by minimizing the sum of the squared vertical distances between observed and estimated y-values for a regression of y on x, or minimizing the sum of squared horizontal distances for a regression of x on y.
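A compact numerical illustration of the two least-squares directions on synthetic data; note that the y-on-x and x-on-y slopes differ unless the correlation is perfect:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 2.0, 100)

# Regression of y on x: minimizes squared *vertical* distances.
b_yx = np.cov(x, y, bias=True)[0, 1] / x.var()
a_yx = y.mean() - b_yx * x.mean()

# Regression of x on y: minimizes squared *horizontal* distances;
# rewritten in the form y = a + b*x, its slope generally differs from b_yx.
d_xy = np.cov(x, y, bias=True)[0, 1] / y.var()
b_xy = 1.0 / d_xy
a_xy = y.mean() - b_xy * x.mean()

r2 = np.corrcoef(x, y)[0, 1] ** 2  # coefficient of determination
print(f"y-on-x slope {b_yx:.2f}, x-on-y slope {b_xy:.2f}, R^2 {r2:.2f}")
```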
LOGNORMAL ORDINARY KRIGING METAMODEL IN SIMULATION OPTIMIZATIONorajjournal
This paper presents a lognormal ordinary kriging (LOK) metamodel algorithm and its application to optimize a stochastic simulation problem. Kriging models were developed as an interpolation method in geology and have been used successfully for deterministic simulation optimization (SO) problems. In recent years, kriging metamodeling has attracted growing interest for stochastic problems, and SO researchers have begun using ordinary kriging for global optimization in stochastic systems. The goals of this study are to present the LOK metamodel algorithm and to analyze the results of its application step by step. The results show that LOK is a powerful alternative metamodel in simulation optimization when the data are highly skewed.
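A minimal sketch of the idea behind LOK, assuming a Gaussian covariance model and the naive exponential back-transform (the paper's algorithm is more elaborate): krige the log-transformed responses with ordinary kriging, then return to the original scale.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical skewed 1-D data: positive responses observed at 12 sites.
s = np.sort(rng.uniform(0, 10, 12))
z = np.exp(0.8 * np.sin(s) + rng.normal(0, 0.2, 12))
y = np.log(z)  # LOK works in log space

def cov(h, sill=1.0, corr_len=2.0):
    # Gaussian covariance model (an assumption, not the paper's fitted model).
    return sill * np.exp(-(h / corr_len) ** 2)

# Ordinary kriging system: [[C, 1], [1', 0]] [w; mu] = [c0; 1].
C = cov(np.abs(s[:, None] - s[None, :])) + 1e-8 * np.eye(len(s))  # tiny nugget
A = np.block([[C, np.ones((len(s), 1))],
              [np.ones((1, len(s))), np.zeros((1, 1))]])

def krige_log(s_star):
    rhs = np.append(cov(np.abs(s - s_star)), 1.0)
    w = np.linalg.solve(A, rhs)[:-1]
    return w @ y  # prediction of the log-response

# Naive back-transform to the original (skewed) scale.
preds = [float(np.exp(krige_log(t))) for t in (2.5, 5.0, 7.5)]
print(np.round(preds, 3))
```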
Non-life claims reserves using Dirichlet random environmentIJERA Editor
The purpose of this paper is to propose a stochastic extension of the chain-ladder model in a Dirichlet random environment to calculate provisions for disaster payments. We study Dirichlet processes centered around the distribution of continuous-time stochastic processes such as a Brownian motion or a continuous-time Markov chain. We then consider the problem of parameter estimation for a Markov-switched geometric Brownian motion (GBM) model. We assume that the prior distribution of the unobserved Markov chain driving the drift and volatility parameters of the GBM is a Dirichlet process, and we propose an estimation method based on Gibbs sampling.
A Fuzzy Mean-Variance-Skewness Portfolioselection Problem.inventionjournals
A fuzzy number is a normal and convex fuzzy subset of the real line. In this paper, based on the membership function, we redefine the concepts of mean and variance for fuzzy numbers. Furthermore, we propose the concept of skewness and prove some desirable properties. A fuzzy mean-variance-skewness portfolio selection model is formulated and two variations are given, which are transformed into nonlinear optimization models with polynomial objective and constraint functions so that they can be solved analytically. Finally, we present some numerical examples to demonstrate the effectiveness of the proposed models.
This document summarizes a research paper that estimates the scale parameter of the Nakagami distribution using Bayesian methods. The paper derives the posterior distributions of the scale parameter under different prior distributions, including uniform, inverse exponential, and Levy priors. It then finds the Bayesian estimators of the scale parameter under three loss functions: squared error loss function, quadratic loss function, and precautionary loss function. The paper uses Monte Carlo simulations to compare the performance of the different estimators.
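Under one standard conjugacy (known shape m and a uniform prior on the scale Ω, assumptions that may differ from the paper's exact setup), the posterior of the Nakagami scale is inverse gamma, and the Bayes estimators under two of the loss functions mentioned have simple forms:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(21)

# Nakagami(m, Omega): with shape m known, the likelihood in Omega is proportional
# to Omega**(-n*m) * exp(-m * sum(x**2) / Omega). Under a uniform prior the
# posterior of Omega is inverse gamma with shape n*m - 1 and scale m * sum(x**2).
# This is a conjugacy sketch, not necessarily the paper's exact priors/losses.
m, omega_true, n = 2.0, 4.0, 200
x = stats.nakagami(m, scale=np.sqrt(omega_true)).rvs(n, random_state=rng)

shape_post = n * m - 1
scale_post = m * np.sum(x**2)
post = stats.invgamma(shape_post, scale=scale_post)

omega_sel = post.mean()               # Bayes estimator under squared error loss
omega_prec = np.sqrt(post.moment(2))  # precautionary loss: sqrt(E[Omega^2])
print(f"true Omega = {omega_true}, SEL = {omega_sel:.3f}, "
      f"precautionary = {omega_prec:.3f}")
```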
What do a Lego brick and the XZ backdoor have in common? Speck&Tech
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case share much more than that.
Join the presentation to dive into a story of interoperability, standards and open formats, and then discuss the important role contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several LibreOffice-related events, migrations and training activities. She previously worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not following her passion for computers and for Geeko she cultivates her curiosity about astronomy (hence her nickname, deneb_alpha).
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. The constant focus on speed in releasing software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Building RAG with self-deployed Milvus vector database and Snowpark Container...Zilliz
This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer's life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability at the expense of security. This best-practices guide outlines steps users can take to better protect their personal devices and information.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests, and test automation can be used to speed up testing.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Rania M. Shalaby et al., Int. Journal of Engineering Research and Applications, ISSN: 2248-9622, Vol. 3, Issue 6, Nov-Dec 2013, pp. 1727-1733

RESEARCH ARTICLE    OPEN ACCESS

On the Estimation of R = P(Y < X < Z) for Weibull Distribution in the Presence of k Outliers

Amal S. Hassan (1), Elsayed A. Elsherpieny (2), and Rania M. Shalaby (3)
(1), (2) (Department of Mathematical Statistics, Institute of Statistical Studies & Research, Cairo University, Orman, Giza, Egypt)
(3) (The Higher Institute of Managerial Science, Culture and Science City, 6th of October, Egypt)

ABSTRACT
This paper deals with the estimation problem of the reliability R = P(Y < X < Z), where X, Y and Z are independent Weibull random variables with a common known shape parameter but different scale parameters, in the presence of k outliers in the strength X. The moment, maximum likelihood and mixture estimators of R are derived. An extensive computer simulation in MathCAD (14) is used to compare the performance of the proposed estimators. The simulation study shows that the mixture estimators are better and easier to compute than the maximum likelihood and moment estimators.

Keywords - Maximum likelihood estimator, mixture estimator, moment estimator, outliers, Weibull distribution.
I. Introduction

In the context of reliability, the stress-strength model describes the life of a component which has a random strength X and is subjected to a random stress Y. This problem arises in classical stress-strength reliability, where one is interested in estimating the proportion of the time that the random strength X of a component exceeds the random stress Y to which the component is subjected. If X ≤ Y, then either the component fails or the system that uses the component malfunctions; there is no failure when Y < X. Stress-strength models of the types P(Y < X) and P(Y < X < Z) have extensive applications in various subareas of engineering, psychology, genetics, clinical trials and so on; see Kotz et al. [1].

The germ of this idea was introduced by Birnbaum [2] and developed by Birnbaum and McCarty [3]. An important particular case is the estimation of R = P(Y < X < Z), which represents the situation where the strength X should not only be greater than the stress Y but also be smaller than the stress Z. For example, many devices cannot function at high temperatures, nor can they at very low ones. Similarly, a person's blood pressure has two limits, systolic and diastolic, and his or her blood pressure should lie within these limits.

Chandra and Owen [4] constructed maximum likelihood estimators (MLEs) and uniform minimum variance unbiased estimators (UMVUEs) for R = P(Y < X < Z). Singh [5] presented the minimum variance unbiased, maximum likelihood and empirical estimators of R = P(Y < X < Z), where X, Y and Z are mutually independent random variables following the normal distribution. Dutta and Sriwastav [6] dealt with the estimation of R when X, Y and Z are exponentially distributed. Ivshin [7] investigated the MLE and UMVUE of R when X, Y and Z are either uniform or exponential random variables with unknown location parameters.

Wang et al. [8] made statistical inference for P(X < Y < Z) via two methods, nonparametric normal approximation and the jackknife empirical likelihood, since the usual empirical likelihood method for U-statistics is too complicated to apply. The results of their simulation studies indicate that these two methods work promisingly compared with other existing methods, and some classical and real data sets were analyzed using them.

The main aim of this article is to focus on the estimation of R = P(Y < X < Z) under the assumption that X, Y and Z are independent. The stresses Y and Z have Weibull distributions with common known shape parameter β and scale parameters α and θ respectively, while the strength X has a Weibull distribution with known shape parameter β and scale parameter γ in the presence of k outliers. The maximum likelihood estimator, moment estimator (ME) and mixture estimator (Mix) are obtained, and a Monte Carlo simulation is performed to compare the different methods of estimation.

The rest of the paper is organized as follows. In Section 2, the estimation of R = P(Y < X < Z) is set up. The moment estimator of R is derived in Section 3. Section 4 discusses the MLE of R. The mixture estimator of R is obtained in Section 5. Monte Carlo simulation results are laid out in Section 6. Finally, conclusions are presented in Section 7.
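Before deriving the estimators, it may help to see the target quantity numerically. Below is a hedged Monte Carlo sketch of R = P(Y < X < Z) for Weibull variables with a common shape, including a crude outlier mechanism in the strength; all parameter values and the outlier inflation factor are assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2013)

# Monte Carlo sketch of R = P(Y < X < Z) for Weibull variables with common
# shape beta; every numeric value below is an illustrative assumption.
beta = 2.0
alpha_scale, theta_scale, gamma_scale = 1.0, 2.0, 1.2  # scales of Y, Z, X
n, k, n_trials = 15, 2, 200_000                        # k outliers in strength

y = alpha_scale * rng.weibull(beta, n_trials)  # stress Y
z = theta_scale * rng.weibull(beta, n_trials)  # stress Z

# Strength X: a draw is an outlier with probability k/n, in which case its
# scale is inflated (factor 1.5 here, purely illustrative).
scale_x = np.where(rng.random(n_trials) < k / n, 1.5 * gamma_scale, gamma_scale)
x = scale_x * rng.weibull(beta, n_trials)

print(f"Monte Carlo R = P(Y < X < Z) ~ {np.mean((y < x) & (x < z)):.4f}")
```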
II. Estimation of R = P(Y < X < Z)

This section deals with estimating the reliability R = P(Y < X < Z), where X, Y and Z have Weibull distributions in the presence of k outliers in the strength X, and X, Y and Z are independent. Throughout, the common shape parameter β is known, and the density of a Weibull variable with scale-type parameter λ is written as

f(t) = β λ t^(β−1) exp(−λ t^β),  t > 0, λ > 0, β > 0,

with cdf F(t) = 1 − exp(−λ t^β).

Let X1, X2, …, Xn be the strengths of n independent observations, such that k of them are outliers distributed as Weibull with shape parameter β and an outlying scale parameter, while the remaining (n − k) random variables are distributed as Weibull with scale parameter γ and known shape parameter β. According to Dixit [9] and Dixit and Nasiri [10], the joint distribution of X1, …, Xn in the presence of k outliers is the equally weighted mixture over all choices of the k outlying positions, with the corresponding cumulative distribution function (cdf) obtained in the same way.

Let Y1, Y2, …, Ym be the stresses of m independent observations from a Weibull distribution with scale parameter α and known shape parameter β, and let Z1, Z2, …, Zr be the stresses of r independent observations of the random variable Z with known shape parameter β and scale parameter θ.

According to Singh [5], the reliability R = P(Y < X < Z) takes the form

R = ∫ F_Y(x) S_Z(x) f_X(x) dx, integrated over x from 0 to ∞,        (7)

where F_Y(x) is the cdf of Y at x, S_Z(x) = 1 − F_Z(x) is the survival function of Z at x, and f_X is the density of the strength X in the presence of k outliers.

To compute the estimate of R, estimates of the parameters α, θ and γ must be obtained first. Three well-known methods, namely maximum likelihood, moments and mixture, will be used to obtain the parameter estimates; they are discussed in detail in the following sections.

III. Moment Estimator of R

In this section, the moment estimator of R, denoted by R_Mom, will be obtained. The moment estimators of the unknown parameters α, θ and γ are obtained by equating the population moments with the corresponding sample moments.

With the parametrization above, the population means of the random stresses Y and Z are

E(Y) = Γ(1 + 1/β) α^(−1/β),   E(Z) = Γ(1 + 1/β) θ^(−1/β).

In addition, the first and second population moments of the strength X are mixtures of the corresponding Weibull moments, weighted by the proportions k/n and (n − k)/n of outlying and regular observations.

Suppose that Y1, …, Ym and Z1, …, Zr are random samples of sizes m and r drawn from Weibull distributions with known shape parameter β and scale parameters α and θ respectively. The means of these samples are

Y-bar = (1/m) Σ Yi,   Z-bar = (1/r) Σ Zj.

Let X1, …, Xn be a random sample of size n drawn from the Weibull distribution with known shape parameter β and scale parameter γ in the presence of k outliers; the first and second sample moments are

m1 = (1/n) Σ Xi,   m2 = (1/n) Σ Xi².

Equating the sample moments with the corresponding population moments gives the moment equations (8)-(10). The moment estimators of α and θ follow from (8) as

α~ = [Γ(1 + 1/β) / Y-bar]^β,   θ~ = [Γ(1 + 1/β) / Z-bar]^β.        (11)

The moment estimator of γ is obtained from (9) as follows. Substituting (12) into (10) leads to a quadratic equation; provided its discriminant is non-negative, the roots are real, and the moment estimator of γ, denoted γ~, takes the form given in (13). The moment estimate of the outlying scale parameter is then obtained by substituting (13) back into (12). Finally, the moment estimator of R, denoted R_Mom, is obtained by substituting α~, θ~ and γ~ into (7).

IV. Maximum Likelihood Estimator of R

This section deals with the MLE of the reliability R = P(Y < X < Z) when X, Y and Z are independent Weibull random variables with scale parameters γ, α and θ respectively and known shape parameter β. To compute the MLE of R, the MLEs of α, θ and γ must first be obtained; they are the values that maximize the likelihood function.

To obtain the maximum likelihood estimator of α, let Y1, …, Ym be a random sample of size m drawn from the Weibull distribution with parameters (α, β). The likelihood function of the observed sample is

L(α) = Π β α yi^(β−1) exp(−α yi^β),

and the log-likelihood function (15) is

ln L(α) = m ln β + m ln α + (β − 1) Σ ln yi − α Σ yi^β.

The maximum likelihood estimate of α, say α^, is obtained by setting the first partial derivative of (15) to zero:

α^ = m / Σ yi^β.        (16)

In a similar way, the maximum likelihood estimator of θ is obtained from a random sample Z1, …, Zr of size r drawn from the Weibull distribution with parameters (θ, β):

θ^ = r / Σ zj^β.        (17)

To obtain the maximum likelihood estimators of γ and the outlying scale parameter, let X1, …, Xn be a random sample of size n drawn from the Weibull distribution in the presence of k outliers. The likelihood function of the observed sample can be written as in (18), and the first partial derivatives of the log-likelihood (18) with respect to the two parameters give the likelihood equations (19) and (20).

The MLEs of γ and the outlying scale parameter are the solution of the system obtained by setting the partial derivatives (19) and (20) to zero. Obviously, it is not easy to obtain a closed-form solution to this system, so an iterative method must be applied to solve the equations numerically. The MLE of R, denoted R_MLE, is obtained by substituting the MLEs of α, θ and γ into (7).

V. Mixture Estimator of R

To avoid the difficulty of the complicated system of likelihood equations, the mixture estimator of R, denoted R_Mix, will be obtained. Following Read [11], the mixture estimators of α, θ and γ are derived by mixing the moment estimates and the MLEs obtained in Sections 3 and 4. The mixture estimators of α and θ are obtained from the moment estimators as in (21), while the mixture estimator of γ is obtained from the likelihood equations by substituting the moment estimates, as in (22) and (23). The mixture estimator of R is obtained by substituting the mixture estimates of α, θ and γ into (7).
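To illustrate this estimation machinery in miniature, the following sketch uses the parametrization adopted in the reconstruction above, f(y) = β α y^(β−1) exp(−α y^β), for which the MLE of the scale-type parameter with known shape has the closed form α^ = m / Σ yi^β; it also shows a Newton-Raphson iteration of the kind Step (4) in the next section applies to the outlier likelihood equations. All numeric values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

# Under f(y) = beta*alpha*y**(beta-1)*exp(-alpha*y**beta), the MLE of alpha
# with known shape beta is available in closed form: alpha_hat = m / sum(y**beta).
beta, alpha_true, m = 2.0, 1.3, 50
y = rng.weibull(beta, m) / alpha_true ** (1 / beta)  # Weibull with rate alpha
alpha_hat = m / np.sum(y**beta)

# Newton-Raphson on the score equation u(a) = m/a - sum(y**beta) = 0,
# the kind of iteration applied to the (intractable) outlier likelihood system.
a = 1.0  # starting value
for _ in range(20):
    u = m / a - np.sum(y**beta)  # score function
    du = -m / a**2               # derivative of the score
    a -= u / du                  # Newton step

print(f"closed form {alpha_hat:.4f}, Newton-Raphson {a:.4f}")
```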
VI. Numerical Illustration

In this section, an extensive numerical investigation is carried out to compare the performance of the different estimators for different sample sizes and parameter values of the Weibull distribution in the presence of k outliers. The investigated properties are the biases and mean square errors (MSEs). All computations are performed with the MathCAD (14) statistical package. The algorithm for the parameter estimation can be summarized in the following steps:

Step (1): Generate 1000 random samples X1, …, Xn, Y1, …, Ym and Z1, …, Zr from the Weibull distribution with the sample sizes (n, m, r) = (15,15,15), (20,20,20), (25,25,25), (15,15,20), (15,15,25), (15,20,20), (15,25,20), (15,25,25), (20,15,15), (20,15,20), (20,15,25), (20,25,20), (20,25,25), (25,15,15), (25,15,20), (25,15,25), (25,20,20), (25,20,25).

Step (2): Select the parameter values.

Step (3): The moment estimators of α and θ are obtained by solving (11). The ME of γ is obtained numerically from (12), and the remaining ME by substitution in (13). Once these estimates are computed, R_Mom is obtained using (7).

Step (4): The MLEs of α and θ are obtained from (16) and (17). The nonlinear equations (19) and (20) for the MLEs are solved iteratively using the Newton-Raphson method. The MLE of R is obtained by substituting the MLEs into (7).

Step (5): The mixture estimators of α and θ are computed using (21). The Newton-Raphson method is used to solve (22) and (23) to obtain the remaining estimators. The mixture estimator of R is then obtained by substituting the estimates into (7).

Step (6): The performance of the estimators is evaluated through two measures of accuracy, the biases and the MSEs.
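The simulation design of Steps (1)-(6) can be compressed into a few lines of code; in this sketch the empirical plug-in proportion stands in for the paper's moment, maximum likelihood and mixture estimators, and all parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(14)

# Assumed parameters and one (n, m, r) setting from Step (1).
beta, alpha_s, theta_s, gamma_s = 2.0, 1.0, 2.0, 1.2
n, m, r = 15, 15, 15

# "True" R from one very large sample.
big = 1_000_000
xb = gamma_s * rng.weibull(beta, big)
yb = alpha_s * rng.weibull(beta, big)
zb = theta_s * rng.weibull(beta, big)
R_true = np.mean((yb < xb) & (xb < zb))

est = []
for _ in range(1000):  # 1000 replications, as in Step (1)
    x = gamma_s * rng.weibull(beta, n)
    y = alpha_s * rng.weibull(beta, m)
    z = theta_s * rng.weibull(beta, r)
    # Plug-in estimate: the proportion of all (y_j, x_i, z_l) triples
    # satisfying y_j < x_i < z_l.
    est.append(np.mean((y[None, :, None] < x[:, None, None])
                       & (x[:, None, None] < z[None, None, :])))

est = np.array(est)
bias, mse = est.mean() - R_true, np.mean((est - R_true) ** 2)
print(f"R_true ~ {R_true:.3f}, bias ~ {bias:.4f}, MSE ~ {mse:.4f}")
```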
The simulation results are summarized in Tables 1-4. From these tables, the following conclusions can be drawn about the properties of the estimates of R:

1. The estimated value of R increases as the number of outliers k increases.
2. The estimated values of R based on the moment method are the smallest, while the estimated values based on the mixture method are the highest.
3. The biases of the mixture estimators are the smallest relative to the biases of the maximum likelihood and moment estimators.
4. Comparing the MSEs of all estimators, the mixture estimators perform best.
5. The biases and MSEs of R based on the different estimators increase as the number of outliers increases in almost all cases, except for a few cases.
VII. Conclusions

This article considered the problem of estimating the reliability R = P(Y < X < Z) for the Weibull distribution in the presence of k outliers in the strength X, assuming that X, Y and Z are independent with a common known shape parameter but different scale parameters. The moment, maximum likelihood and mixture estimators of R were derived, and the performance of the estimators was evaluated through their biases and MSEs.

The comparison study revealed that the mixture estimator works best with respect to biases and MSEs, and it is easier to calculate than the maximum likelihood and moment estimators. In general, the mixture method is therefore suggested for estimating R = P(Y < X < Z) for the Weibull distribution in the presence of k outliers.
Table 1: Estimates of R, biases and MSEs of the point estimates from the Weibull distribution.

Sample size  |   Estimates of R      |        Bias            |        MSE
(n, m, r)    |  MLE    Mom    Mix    |  MLE    Mom     Mix    |  MLE    Mom    Mix
(15,15,15)   | 0.358  0.298  0.459   | 0.088  0.148  -0.013   | 0.073  0.112  0.019
(20,20,20)   | 0.382  0.324  0.455   | 0.064  0.122  -0.009   | 0.066  0.123  0.017
(25,25,25)   | 0.356  0.328  0.434   | 0.089  0.117   0.011   | 0.079  0.117  0.016
(15,15,20)   | 0.356  0.311  0.434   | 0.090  0.135   0.012   | 0.068  0.129  0.021
(15,15,25)   | 0.386  0.307  0.462   | 0.060  0.139  -0.016   | 0.073  0.122  0.015
(15,20,20)   | 0.356  0.302  0.429   | 0.090  0.144   0.017   | 0.071  0.119  0.018
(15,25,20)   | 0.349  0.297  0.455   | 0.097  0.149  -0.009   | 0.078  0.126  0.021
(15,25,25)   | 0.354  0.309  0.461   | 0.092  0.137  -0.015   | 0.077  0.117  0.017
(20,15,15)   | 0.359  0.293  0.428   | 0.087  0.153   0.018   | 0.074  0.122  0.016
(20,15,20)   | 0.356  0.299  0.459   | 0.090  0.147  -0.013   | 0.075  0.119  0.024
(20,15,25)   | 0.362  0.309  0.454   | 0.084  0.137  -0.008   | 0.069  0.111  0.018
(20,25,20)   | 0.353  0.303  0.431   | 0.093  0.143   0.015   | 0.078  0.124  0.013
(20,25,25)   | 0.354  0.317  0.460   | 0.092  0.129  -0.014   | 0.081  0.123  0.015
(25,15,15)   | 0.355  0.307  0.436   | 0.090  0.138   0.009   | 0.082  0.119  0.023
(25,15,20)   | 0.360  0.317  0.458   | 0.085  0.128  -0.013   | 0.076  0.114  0.019
(25,15,25)   | 0.390  0.319  0.428   | 0.055  0.126   0.017   | 0.068  0.116  0.017
(25,20,20)   | 0.348  0.312  0.456   | 0.097  0.133  -0.011   | 0.079  0.118  0.014
(25,20,25)   | 0.387  0.318  0.458   | 0.058  0.127  -0.013   | 0.069  0.111  0.011
Table 4: Estimates of R, biases and MSEs of the point estimates from the Weibull distribution.

Sample size  |   Estimates of R      |        Bias            |        MSE
(n, m, r)    |  MLE    Mom    Mix    |  MLE    Mom     Mix    |  MLE    Mom    Mix
(15,15,15)   | 0.453  0.406  0.556   | 0.077  0.124  -0.026   | 0.074  0.076  0.019
(20,20,20)   | 0.466  0.419  0.553   | 0.064  0.111  -0.023   | 0.071  0.064  0.018
(25,25,25)   | 0.458  0.417  0.511   | 0.072  0.113   0.019   | 0.068  0.066  0.018
(15,15,20)   | 0.449  0.408  0.516   | 0.081  0.122   0.014   | 0.076  0.081  0.015
(15,15,25)   | 0.459  0.403  0.514   | 0.071  0.127   0.016   | 0.072  0.073  0.014
(15,20,20)   | 0.456  0.405  0.548   | 0.074  0.125  -0.018   | 0.070  0.084  0.019
(15,25,20)   | 0.447  0.406  0.542   | 0.083  0.124  -0.012   | 0.068  0.081  0.017
(15,25,25)   | 0.453  0.402  0.513   | 0.077  0.128   0.017   | 0.064  0.089  0.015
(20,15,15)   | 0.457  0.398  0.511   | 0.073  0.132   0.019   | 0.071  0.071  0.018
(20,15,20)   | 0.451  0.399  0.509   | 0.079  0.131   0.021   | 0.079  0.082  0.014
(20,15,25)   | 0.458  0.408  0.548   | 0.072  0.122  -0.018   | 0.074  0.077  0.019
(20,25,20)   | 0.448  0.404  0.517   | 0.082  0.126   0.013   | 0.069  0.076  0.014
(20,25,25)   | 0.449  0.401  0.543   | 0.081  0.129  -0.013   | 0.077  0.071  0.010
(25,15,15)   | 0.459  0.412  0.553   | 0.071  0.118  -0.023   | 0.082  0.073  0.013
(25,15,20)   | 0.458  0.409  0.517   | 0.072  0.121   0.013   | 0.081  0.075  0.017
(25,15,25)   | 0.461  0.404  0.545   | 0.069  0.126  -0.015   | 0.074  0.081  0.011
(25,20,20)   | 0.468  0.397  0.519   | 0.062  0.133   0.011   | 0.076  0.080  0.017
(25,20,25)   | 0.461  0.408  0.514   | 0.069  0.122   0.016   | 0.071  0.077  0.015
References
[1] S. Kotz, Y. Lumelskii, and M. Pensky, The Stress-Strength Model and Its Generalizations: Theory and Applications (World Scientific Publishing Co., 2003).
[2] Z. W. Birnbaum, On a use of the Mann-Whitney statistic, Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, I, 1956, 13-17.
[3] Z. W. Birnbaum and R. C. McCarty, A distribution-free upper confidence bound for Pr(Y < X), based on independent samples of X and Y, Annals of Mathematical Statistics, 29, 1958, 557-562.
[4] S. Chandra and D. B. Owen, On estimating the reliability of a component subject to several different stresses (strengths), Naval Research Logistics Quarterly, 22, 1975, 31-39.
[5] N. Singh, On the estimation of Pr(X1 < Y < X2), Communications in Statistics - Theory & Methods, 9, 1980, 1551-1561.
[6] K. Dutta and G. L. Sriwastav, An n-standby system with P(X < Y < Z), IAPQR Transactions, 12, 1986, 95-97.
[7] V. V. Ivshin, On the estimation of the probabilities of a double linear inequality in the case of uniform and two-parameter exponential distributions, Journal of Mathematical Sciences, 88, 1998, 819-827.
[8] Z. Wang, W. Xiping, and P. Guangming, Nonparametric statistical inference for P(X < Y < Z), The Indian Journal of Statistics, 75-A(1), 2013, 118-138.
[9] U. J. Dixit, Estimation of parameters of the gamma distribution in the presence of outliers, Communications in Statistics - Theory and Methods, 18, 1989, 3071-3085.
[10] U. J. Dixit and P. F. Nasiri, Estimation of parameters of the exponential distribution in the presence of outliers generated from uniform distribution, Metron, 49(3-4), 2001, 187-198.
[11] R. R. Read, Representation of certain covariance matrices with application to asymptotic efficiency, Journal of the American Statistical Association, 76, 1981, 148-154.