This document presents a multiple linear regression model with errors that follow a two-parameter doubly truncated new symmetric distribution. The model extends traditional linear regression by allowing for non-Gaussian distributed errors that are finite and bounded. Properties of the doubly truncated new symmetric distribution are derived, including its probability density function and characteristic function. Maximum likelihood and ordinary least squares methods are used to estimate the model parameters. A simulation study compares the proposed model to existing models that assume Gaussian or new symmetric distributed errors.
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering & Science are selected through rigorous peer review to ensure originality, timeliness, relevance, and readability.
The effectiveness of various analytical formulas for estimating R² shrinkage in multiple regression analysis was investigated. Two categories of formulas were identified: estimators of the squared population multiple correlation coefficient (ρ²) and estimators of the squared population cross-validity coefficient (ρ²c). The authors compared the effectiveness of the analytical formulas for determining R² shrinkage against the squared population multiple correlation coefficient and the number of predictors; after examining all combinations of variables, the combination with the maximum correlation was selected to compute both categories of formulas. The results indicated that, among the 6 analytical formulas designed to estimate the population ρ² and the 9 analytical formulas overall, the Olkin-Pratt formula-1 for six variables, followed by the Burket formula and Lord formula-2, was found to be the most stable and satisfactory.
This document summarizes a research article about using particle swarm optimization to find different shrinkage parameters (k values) for each explanatory variable in ridge regression, rather than a single k value. Ridge regression is used to address multicollinearity issues in multiple regression analysis. Typically, ridge regression estimates a single k value, but this study uses an algorithm based on particle swarm optimization to estimate different k values for each variable. The study applies this new method to real data and simulations to evaluate its performance compared to other ridge regression methods.
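The core idea above, one shrinkage parameter per predictor, can be sketched with the closed-form generalized ridge estimator. The sketch below uses a fixed, hand-picked vector k and synthetic collinear data; the article's particle-swarm search for k and its data sets are not reproduced here.

```python
import numpy as np

def generalized_ridge(X, y, k):
    """Generalized ridge estimate with one shrinkage parameter per column of X:
    solves (X'X + diag(k)) beta = X'y.  Ordinary ridge regression is the
    special case in which every entry of k is the same scalar."""
    return np.linalg.solve(X.T @ X + np.diag(k), X.T @ y)

# Two nearly collinear predictors plus noise (synthetic illustration).
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=200)

beta_ols = generalized_ridge(X, y, np.zeros(2))            # k = 0 recovers OLS
beta_gr = generalized_ridge(X, y, np.array([0.5, 2.0]))    # per-variable k
```

In a PSO-based method the vector k passed in would be the particle position being evaluated, scored by some loss such as cross-validated prediction error.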
Estimation of Reliability in Multicomponent Stress-Strength Model Following B... - IJERA Editor
The stress-strength model is considered for the reliability of a multicomponent system in which the device under consideration is a combination of K independent components with strengths X1, X2, ..., XK, each component experiencing a common stress Y. The system is regarded as alive if at least S out of K (S < K) strengths exceed the stress. The reliability of such a system is obtained when the stress and strengths are assumed to have Burr XII distributions with a common shape parameter (c). The research methodology adopted here is to estimate the parameters by the maximum likelihood method. Based on different types of ranked set sampling, the reliability estimators are obtained when samples are drawn from the strength and stress distributions. Efficiencies of the reliability estimators based on ranked set sampling, median ranked set sampling, extreme ranked set sampling and percentile ranked set sampling are calculated with respect to reliability estimators based on the simple random sampling procedure.
This document summarizes a study that used the fuzzy TOPSIS method to select the optimal type of spillway for a dam in northern Greece called Pigi Dam. Five alternative spillway types were evaluated based on nine criteria. The criteria were expressed as triangular fuzzy numbers to account for uncertainty. Weights for the criteria were determined using the AHP method and also expressed linguistically as fuzzy numbers. The fuzzy TOPSIS method was then used to rank the alternatives based on their distances from the ideal and negative-ideal solutions. The alternative with the highest relative closeness to the ideal solution was determined to be the optimal spillway type.
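The distance-to-ideal ranking at the heart of TOPSIS can be sketched in its crisp (non-fuzzy) form; the study itself uses triangular fuzzy numbers and AHP-derived weights, and the decision matrix and weights below are hypothetical, not the Pigi Dam data.

```python
import numpy as np

def topsis(decision, weights, benefit):
    """Crisp TOPSIS: rank alternatives by relative closeness to the ideal
    solution.  `decision` is (alternatives x criteria); `benefit[j]` is True
    for benefit criteria and False for cost criteria."""
    v = decision / np.linalg.norm(decision, axis=0) * weights  # normalize, weight
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))    # A+
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))     # A-
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                             # C* in [0, 1]

# Hypothetical 3 alternatives x 3 criteria, all benefit criteria.
decision = np.array([[7.0, 9.0, 9.0],
                     [8.0, 7.0, 8.0],
                     [9.0, 6.0, 8.0]])
closeness = topsis(decision,
                   weights=np.array([0.5, 0.3, 0.2]),
                   benefit=np.array([True, True, True]))
best = int(np.argmax(closeness))    # index of the recommended alternative
```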
This document reviews a 2001 mathematical model by Feng, Sitek and Gullberg for determining the surface area and volume of the left ventricle (LV) based on its geometry as a truncated prolate spheroid. It finds that using the same model to calculate both volume and area results in a perfect linear correlation between the two, due to their shared dependence on the model. Alternative models using Cartesian coordinates are explored that use echocardiographic measurements of the LV to calculate volume and area independently; these still find a very strong linear correlation between the two. The document concludes it is unrealistic to expect a perfect correlation when the same model is used to calculate both parameters.
Bayesian Variable Selection in Linear Regression and A Comparison - Atilla YARDIMCI
In this study, Bayesian approaches, such as Zellner, Occam’s Window and Gibbs sampling, have been compared in terms of selecting the correct subset for the variable selection in a linear regression model. The aim of this comparison is to analyze Bayesian variable selection and the behavior of classical criteria by taking into consideration the different values of β and σ and prior expected levels.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document proposes generalized additive models (GAMs) to model conditional dependence structures between random variables. Specifically, it develops a GAM framework where a dependence or concordance measure between two variables is modeled as a parametric, non-parametric, or semi-parametric function of explanatory variables. It derives the root-n consistency and asymptotic normality of the maximum penalized log-likelihood estimator for the proposed GAMs. It also discusses details of the estimation procedure and selection of smoothing parameters.
A Singular Spectrum Analysis Technique to Electricity Consumption Forecasting - IJERA Editor
Singular Spectrum Analysis (SSA) is a relatively new and powerful nonparametric tool for analyzing and forecasting economic data. SSA is capable of decomposing the main time series into independent components such as trends, oscillatory components and noise. This paper evaluates the performance of the SSA approach on the monthly electricity consumption of the Middle Province in Gaza Strip/Palestine. The forecasting results are compared with the results of exponential smoothing state space (ETS) and ARIMA models. The three techniques do similarly well in the forecasting process; however, SSA outperforms the ETS and ARIMA techniques according to forecasting error accuracy measures.
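The decomposition step SSA performs, embedding into a Hankel trajectory matrix, SVD, and diagonal averaging, can be sketched as follows. The window length and the synthetic trend-plus-seasonality series are illustrative assumptions, not the paper's electricity data.

```python
import numpy as np

def ssa_components(series, window, n_components):
    """Basic SSA sketch: embed the series in a Hankel trajectory matrix,
    take its SVD, and reconstruct one sub-series per leading component
    by diagonal averaging (Hankelization)."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    k = n - window + 1
    # Trajectory matrix: column j holds series[j : j + window].
    traj = np.column_stack([series[j:j + window] for j in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    comps = []
    for i in range(n_components):
        elem = s[i] * np.outer(u[:, i], vt[i])   # rank-1 elementary matrix
        # Diagonal averaging: mean over each anti-diagonal gives time d.
        rec = np.array([np.mean(elem[::-1].diagonal(d - window + 1))
                        for d in range(n)])
        comps.append(rec)
    return comps

t = np.arange(120)
series = 0.05 * t + np.sin(2 * np.pi * t / 12)   # trend + monthly seasonality
trend_like, *rest = ssa_components(series, window=24, n_components=3)
```

Forecasting then proceeds by fitting a linear recurrence to the selected components, which is beyond this sketch.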
This document discusses a one-way analysis of variance (ANOVA) used to compare the effects of different oil types (A, B, C) on car mileage. It tests the null hypothesis that the mean mileages are equal against the alternative that at least two means differ. The ANOVA calculates sums of squares and F statistics to determine if there are significant differences between the treatment means, rejecting the null hypothesis if F exceeds the critical value. If differences exist, pairwise comparisons estimate the size of differences between each pair of means using confidence intervals.
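The sums-of-squares arithmetic described above can be sketched directly; the mileage figures are hypothetical.

```python
from statistics import mean

def one_way_anova(*groups):
    """One-way ANOVA by hand: partition total variation into between-treatment
    and within-treatment sums of squares and form the F statistic."""
    all_obs = [x for g in groups for x in g]
    grand = mean(all_obs)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_obs) - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Hypothetical mileage (km/l) under oil types A, B, C.
a = [25.1, 24.8, 25.5, 25.0]
b = [23.9, 24.2, 23.7, 24.0]
c = [26.0, 25.7, 26.3, 25.9]
f_stat, df1, df2 = one_way_anova(a, b, c)
# Reject H0 at level alpha if f_stat exceeds the critical value F(alpha; df1, df2).
```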
Shape analysis using conformal mapping is a technique that analyzes geometric shapes by mapping their connections and distributions into computer programs. It is useful in fields like archaeology, architecture, medical imaging, and computer design. Conformal mapping preserves angles between shapes and can map 2D and 3D shapes onto each other. For brain mapping specifically, spherical harmonics equations are used to compress brain surface data and compare different brain scans. Conformal mapping proves useful for medical applications due to its ability to uniformly map points on brain surfaces without distortion.
Finite Element Approach to the Solution of Fourth Order Beam Equation - IJESM JOURNAL
The finite element method is a class of mathematical tools which approximates solutions to initial and boundary value problems. Finite elements, basis functions and stiffness matrices yield systems of ordinary differential equations, and hence approximate solutions of partial differential equations: the approach involves rendering the partial differential equation into a system of ordinary differential equations, which are then numerically integrated.
We present a finite element approach to solving the fourth order linear beam equation u_tt + c^2 u_xxxx = f(x, t), which arises in model studies of wave theory in building structures. In this physical application, the coefficient c^2 has the meaning of flexural rigidity per linear mass density, and f(x, t) is the external forcing term. In this paper, we give a solution to the beam equation with c^2 = 139 and f(x, t) = 100.
A Singular Spectrum Analysis Technique to Electricity Consumption Forecasting - IJERA Editor
This document summarizes a research paper that uses Singular Spectrum Analysis (SSA) to forecast electricity consumption in the Middle Province of Gaza Strip. SSA is a nonparametric time series analysis technique that decomposes a time series into independent components like trends, oscillations, and noise. The paper applies SSA to monthly electricity consumption data from the region. It finds that SSA outperforms exponential smoothing and ARIMA models in forecast accuracy according to error measures. The paper provides an overview of the mathematical methodology behind SSA and its application to electricity consumption forecasting.
Statistics is the science of dealing with numbers and data. It involves collecting, summarizing, presenting, and analyzing data. There are four main steps: data collection, summarization by removing unwanted data and classifying/tabulating, presentation with diagrams/graphs/tables, and analysis using measures like average, dispersion, and correlation. Descriptive statistics summarize and describe data, while inferential statistics allow generalizing from samples to populations. Common descriptive statistics include measures of central tendency (mean, median, mode), variability (range, variance, standard deviation), and distribution properties. Inferential statistics techniques like hypothesis testing and ANOVA are used to make inferences about populations based on samples.
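The descriptive measures listed above can be computed with the Python standard library; the data are hypothetical.

```python
import statistics as st

# Hypothetical sample: daily units sold over two weeks.
data = [12, 15, 11, 15, 14, 18, 15, 13, 16, 12, 14, 15, 17, 13]

# Measures of central tendency.
central = {"mean": st.mean(data), "median": st.median(data), "mode": st.mode(data)}
# Measures of variability (sample variance uses the n - 1 denominator).
spread = {"range": max(data) - min(data),
          "variance": st.variance(data),
          "std_dev": st.stdev(data)}
```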
A Study of Some Tests of Uniformity and Their Performances - IOSRJM
This document discusses a study that evaluates the performance of eleven tests for uniformity. The tests include Kolmogorov-Smirnov, Anderson-Darling, Cramer-von Mises, Watson, Sukhatme, probability product, Kuiper, Gini, ZhangA, ZhangC tests. Through simulation, the power of these tests is assessed under various sample sizes and five alternative distributions. The results are displayed in tables and graphs. The document concludes that the performance of the tests depends on the sample size and nature of the alternative distribution.
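As an illustration of one of the tests studied, the Kolmogorov-Smirnov statistic against Uniform(0, 1) can be computed by hand. The samples and the asymptotic 5% critical value 1.358/sqrt(n) below are illustrative choices, not the paper's simulation design.

```python
import random

def ks_statistic_uniform(sample):
    """Kolmogorov-Smirnov statistic D_n against the Uniform(0, 1) CDF F(x) = x."""
    xs = sorted(sample)
    n = len(xs)
    d_plus = max((i + 1) / n - x for i, x in enumerate(xs))   # ECDF above F
    d_minus = max(x - i / n for i, x in enumerate(xs))        # ECDF below F
    return max(d_plus, d_minus)

rng = random.Random(42)
uniform_sample = [rng.random() for _ in range(500)]
skewed_sample = [rng.random() ** 3 for _ in range(500)]   # far from uniform

crit = 1.358 / 500 ** 0.5   # asymptotic 5% critical value for large n
d_unif = ks_statistic_uniform(uniform_sample)
d_skew = ks_statistic_uniform(skewed_sample)
```

Power studies like the one summarized above repeat this comparison over many simulated samples and alternatives, recording how often each statistic exceeds its critical value.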
Stability criterion of periodic oscillations in a (4) - Alexander Decker
1) The authors establish that the distribution of the harmonic mean of group variances is a generalized beta distribution through simulation.
2) They show that the generalized beta distribution can be approximated by a chi-square distribution.
3) This means that the harmonic mean of group variances is approximately chi-square distributed, though the degrees of freedom need not be an integer. Using the harmonic mean in place of the pooled variance allows hypothesis testing when group variances are unequal.
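The quantities compared above can be reproduced in a small simulation; the group sizes and standard deviations are arbitrary choices.

```python
import random
from statistics import variance, harmonic_mean

# Several groups with deliberately unequal spreads.
rng = random.Random(7)
groups = [[rng.gauss(0.0, sd) for _ in range(30)] for sd in (1.0, 2.0, 4.0)]

group_vars = [variance(g) for g in groups]
h_mean = harmonic_mean(group_vars)      # candidate replacement statistic
# Classical pooled variance, weighted by degrees of freedom.
pooled = (sum((len(g) - 1) * v for g, v in zip(groups, group_vars))
          / sum(len(g) - 1 for g in groups))
```

With equal group sizes the pooled variance is the arithmetic mean of the group variances, so the harmonic mean is always at most the pooled value, with the gap growing as the variances become more unequal.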
A System of Estimators of the Population Mean under Two-Phase Sampling in Pre... - Premier Publishers
This paper deals with estimation of the population mean under two-phase sampling. Utilizing information on two auxiliary variables, a system of estimators for estimating the finite population mean is proposed and its properties, up to the first order of approximation, are studied. Various estimators are suggested as particular cases. The performance of the suggested estimators is compared with some contemporary estimators of the population mean through numerical illustrations carried out over data sets of some natural populations. Also, a small-scale Monte Carlo simulation is carried out for an empirical comparison.
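The classical two-phase ratio estimator, one natural member of such a system of estimators, can be sketched as follows; the numbers are hypothetical, and this is the textbook single-auxiliary estimator rather than the authors' proposed system.

```python
from statistics import mean

def two_phase_ratio_estimate(y_second, x_second, x_first):
    """Two-phase (double sampling) ratio estimator sketch: a large first-phase
    sample measures only the auxiliary variable x; a smaller second-phase
    subsample measures the study variable y as well.  The population mean of
    y is estimated by mean(y2) * mean(x1) / mean(x2)."""
    return mean(y_second) * mean(x_first) / mean(x_second)

# Hypothetical illustration: y is roughly proportional to x, so the cheap
# first-phase information about x sharpens the estimate of the mean of y.
y2 = [21.0, 39.8, 61.1]          # measured on the second-phase subsample
x2 = [10.0, 20.0, 30.0]          # auxiliary values of the same subsample
x1 = [10.0, 20.0, 30.0, 40.0]    # larger first-phase sample (x only)
ybar_hat = two_phase_ratio_estimate(y2, x2, x1)
```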
SEMI-PARAMETRIC ESTIMATION OF P_{X,Y}({(x, y) : x > y}) FOR THE POWER FUNCTION DISTR... - IJESM JOURNAL
The stress-strength model describes the life of a component which has a random strength X and is subjected to a random stress Y, in the context of reliability. The component functions satisfactorily whenever X > Y, and it fails at the instant the stress applied to it exceeds the strength. R = P(Y < X) is a measure of component reliability. In this paper, we obtain semi-parametric estimators of the reliability under the stress-strength model for the power function distribution under complete and censored samples. We illustrate the performance of the estimators using a simulation study.
This document discusses hypothesis testing using z- and t-tests. It begins by introducing key concepts like sampling distributions and the central limit theorem. It explains that as sample size increases, the sampling distribution of the mean approaches a normal distribution, even if the population is not normally distributed. It then provides an example to illustrate these concepts using a small population. The document discusses how the central limit theorem can be used to determine if a sampling distribution is approximately normal. It also explains that the rule of needing a sample size of 30 refers to approximating the t-distribution with the normal distribution, not the sampling distribution itself. Finally, it works through an example problem using a sampling distribution to solve a hypothesis test with a z-score.
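The final z-test step described above can be sketched as follows; the sample figures are hypothetical.

```python
import math

def z_test(sample_mean, mu0, sigma, n):
    """One-sample z-test: under H0 the standardized sample mean
    (xbar - mu0) / (sigma / sqrt(n)) is approximately standard normal
    for large n, by the central limit theorem."""
    se = sigma / math.sqrt(n)
    z = (sample_mean - mu0) / se
    # Two-sided p-value from the standard normal CDF, via erf.
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# Hypothetical numbers: H0: mu = 50 vs H1: mu != 50, known sigma = 8, n = 64.
z, p = z_test(sample_mean=52.3, mu0=50.0, sigma=8.0, n=64)
# z = 2.3 and p is about 0.021, so H0 is rejected at the 5% level.
```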
Dimensional analysis is a technique for simplifying physical problems by reducing the number of variables. It is useful for presenting experimental data, solving problems without exact theories, checking equations, and establishing relative importance of phenomena. Dimensional analysis refers to the physical nature and units of quantities. It can be used to develop equations for fluid phenomena, convert between unit systems, and reduce variables in experiments. Two common methods are Rayleigh's method, which determines expressions involving a maximum of four variables, and Buckingham's π-method, which arranges variables into dimensionless groups. Dimensional analysis provides a partial solution and requires understanding the relevant phenomena.
Community Analysis of Fashion Coordination Using a Distance of Categorical Da... - IJERA Editor
While various clustering methods have been proposed for categorical data, few studies exist on the clustering of categorical variables that are represented as item sets (groups), e.g. fashion coordination data. In order to fill this gap, this study focuses on the patterns of similarities found between coordinated fashion items, and defines a new set of score value rows for calculating similarity indexes from those patterns. These similarity indexes are then inverted into distances, on which the clustering method proposed in this study (hereinafter "the Proposed Method") is based. Two evaluation experiments were then conducted using about 150,000 coordination records obtained from the WEAR web site. Firstly, a silhouette analysis confirms that the Proposed Method offers better cluster partitioning in comparison to previous studies. Secondly, an efficacy evaluation reveals seasonal patterns in fashion coordination, as well as trends for brands and colors of individual fashion items.
A Stochastic Iteration Method for A Class of Monotone Variational Inequalitie... - CSCJournals
We examine a general method for obtaining a solution to a class of monotone variational inequalities in Hilbert space. Let H be a real Hilbert space, let T : H -> H be a continuous linear monotone operator, and let K be a nonempty closed convex subset of H. From an initial arbitrary point x0 ∈ K, we propose and obtain an iterative method that converges in norm to a solution of the class of monotone variational inequalities. A stochastic scheme {xn} is defined as follows: x_{n+1} = x_n - a_n F*(x_n), n ≥ 0, where F*(x_n) is a strong stochastic approximation of T x_n - b, for all b (possibly zero) ∈ H, and a_n ∈ (0, 1).
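The scheme can be sketched in the simplest Hilbert space, H = R, with a linear monotone operator; the operator, step sizes and noise level below are illustrative assumptions, not the paper's setting.

```python
import random

def stochastic_iteration(t_apply, b, x0, steps, seed=0):
    """Sketch of the scheme x_{n+1} = x_n - a_n F*(x_n), where F*(x_n) is a
    noisy (stochastic) estimate of T x_n - b and a_n = 1/(n + 2) lies in (0, 1).
    Illustration only: here H = R rather than a general Hilbert space."""
    rng = random.Random(seed)
    x = x0
    for n in range(steps):
        noise = rng.gauss(0.0, 0.1)
        f_star = t_apply(x) - b + noise   # stochastic approximation of Tx - b
        x -= (1.0 / (n + 2)) * f_star
    return x

# Monotone linear operator T x = 3x on R; the solution of T x = b is x = 2.
x_hat = stochastic_iteration(lambda x: 3.0 * x, b=6.0, x0=10.0, steps=20000)
```

The diminishing step sizes satisfy the usual Robbins-Monro conditions (their sum diverges while the sum of their squares converges), which is what drives the noisy iterates toward the solution.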
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION - mathsjournal
In numerical analysis, interpolation is a manner of calculating the unknown values of a function for any given value of the argument within the limits of the arguments. It provides, basically, a concept of estimating unknown data by relating known data. The main goal of this research is to constitute a central difference interpolation method derived from the combination of Gauss's third formula, Gauss's backward formula and Gauss's forward formula. We have also demonstrated graphical presentations as well as a comparison of all the existing interpolation formulas with our proposed method of central difference interpolation. By the comparison and graphical presentation, the new method gives the best result, with the lowest error among the existing interpolation formulas.
The Characteristics of Traffic Accidents Caused by Human Behavior on the Road... - theijes
The document summarizes a study analyzing characteristics of traffic accidents caused by human behavior on Mayjen Sungkono Road in Malang City, Indonesia from 2008 to 2012. Some key findings include:
1) 87% of accidents were caused by private individuals.
2) Over 50% of accident perpetrators came from Malang district.
3) Nearly 50% of accident perpetrators and over 35% of pedestrian victims involved were between 26-45 years old.
A General-Purpose Architectural Approach to Energy Efficiency for Greendroid ... - theijes
Mobile application processors are soon to replace desktop processors as the focus of innovation in microprocessor technology. Already, these processors have largely caught up to their more power-hungry cousins, supporting out-of-order execution and multicore processing. In the near future, the exponentially worsening problem of dark silicon is going to be the primary force that dictates the evolution of these designs. We have argued that the natural evolution of mobile application processors is to use this dark silicon to create hundreds of automatically generated energy-saving cores, called conservation cores, which can reduce energy consumption by an order of magnitude. Conservation cores (C-cores) try to solve the utilization wall and, consequently, dark silicon issues. Greendroid is a development library for the Android platform, intended to make UI development easier and consistent across applications. This paper describes GreenDroid, a research prototype that demonstrates the use of such cores to save energy broadly across the hotspots in the Android mobile phone software stack.
Availability of a Redundant System with Two Parallel Active Components - theijes
This paper considers a redundant system which consists of two parallel active components. The time-to-failure and the time-to-repair of the components follow an exponential and a general distribution, respectively. The repairs of failed components are randomly interrupted. The time-to-interrupt is taken from an exponentially distributed random variable and the interrupt times are generally distributed. We obtain the availability for the system.
Production of CH4 and C2 hydrocarbons by axial and radial pulse H2/CO2 discha... - theijes
Production of methane (CH4) from a gas mixture of carbon dioxide (CO2) and hydrogen (H2) has been established by two types of pulse discharges. One is an axial discharge using a pair of thin Ni wire electrodes separated by a narrow gap, and the other is a coaxially radial discharge using an inner rod electrode and an outer tube electrode made of stainless steel (SUS). The former provides an intense gap discharge, while the latter provides a gentle discharge in the annular region. Decomposition of CO2 is enhanced in the former case when a Ni (nickel) mesh disc electrode is placed behind the gap; Ni is known as a catalyst. When the radial discharge proceeds in a closed gas system, C2 hydrocarbons such as ethane and ethylene are generated when a cylindrical mesh electrode made of Ni is attached to the powered SUS tube electrode. Both the CH4 production and the energy efficiency of CH4 production are enhanced in the case of Ni mesh electrodes, without the use of additional heating for the Ni catalyst. A synergy effect of plasma and Ni catalyst is observed.
Experimental Determination of Fracture Energy by RILEM Method - theijes
This paper deals with the investigation of the fracture energy (GF) of concrete. The study involves experimental determination of fracture energy (GF) by testing three-point bend concrete beams of the same size but varying notch-to-depth ratios. RILEM fracture energy (GF) and stress intensity factor values are determined.
This document proposes generalized additive models (GAMs) to model conditional dependence structures between random variables. Specifically, it develops a GAM framework where a dependence or concordance measure between two variables is modeled as a parametric, non-parametric, or semi-parametric function of explanatory variables. It derives the root-n consistency and asymptotic normality of the maximum penalized log-likelihood estimator for the proposed GAMs. It also discusses details of the estimation procedure and selection of smoothing parameters.
A Singular Spectrum Analysis Technique to Electricity Consumption ForecastingIJERA Editor
Singular Spectrum Analysis (SSA) is a relatively new and powerful nonparametric tool for analyzing and forecasting economic data. SSA is capable of decomposing the main time series into independent components like trends, oscillatory manner and noise. This paper focuses on employing the performance of SSA approach to the monthly electricity consumption of the Middle Province in Gaza Strip\Palestine. The forecasting results are compared with the results of exponential smoothing state space (ETS) and ARIMA models. The three techniques do similarly well in forecasting process. However, SSA outperforms the ETS and ARIMA techniques according to forecasting error accuracy measures
This document discusses a one-way analysis of variance (ANOVA) used to compare the effects of different oil types (A, B, C) on car mileage. It tests the null hypothesis that the mean mileages are equal against the alternative that at least two means differ. The ANOVA calculates sums of squares and F statistics to determine if there are significant differences between the treatment means, rejecting the null hypothesis if F exceeds the critical value. If differences exist, pairwise comparisons estimate the size of differences between each pair of means using confidence intervals.
Shape analysis using conformal mapping is a technique that analyzes geometric shapes by mapping their connections and distributions into computer programs. It is useful in fields like archaeology, architecture, medical imaging, and computer design. Conformal mapping preserves angles between shapes and can map 2D and 3D shapes onto each other. For brain mapping specifically, spherical harmonics equations are used to compress brain surface data and compare different brain scans. Conformal mapping proves useful for medical applications due to its ability to uniformly map points on brain surfaces without distortion.
Finite Element Approach to the Solution of Fourth order Beam Equation:IJESM JOURNAL
Finite element method is a class of mathematical tool which approximates solutions to initial and
boundary value problems. Finite element, basic functions, stiffness matrices,systems of ordinary
differential equations and hence approximate solutions of partial differential equations which
involves rendering the partial differential equation into system of ordinary differential equations.
The ordinary differential equations are then numerically integrated.
We present a finite element approach in solving fourth order linear beam equation:
2 , tt xxxx u c u f x t , which arises in model studies of building structures wave theory.
In physical application of waves in building structures, coefficient 2 c , has the meaning of flexural
rigidity per linear mass density and f x,t external forcing term. In this paper, we give a solution
to the beam equation with 2 c 139and f x, t 100.
A Singular Spectrum Analysis Technique to Electricity Consumption ForecastingIJERA Editor
This document summarizes a research paper that uses Singular Spectrum Analysis (SSA) to forecast electricity consumption in the Middle Province of Gaza Strip. SSA is a nonparametric time series analysis technique that decomposes a time series into independent components like trends, oscillations, and noise. The paper applies SSA to monthly electricity consumption data from the region. It finds that SSA outperforms exponential smoothing and ARIMA models in forecast accuracy according to error measures. The paper provides an overview of the mathematical methodology behind SSA and its application to electricity consumption forecasting.
Statistics is the science of dealing with numbers and data. It involves collecting, summarizing, presenting, and analyzing data. There are four main steps: data collection, summarization by removing unwanted data and classifying/tabulating, presentation with diagrams/graphs/tables, and analysis using measures like average, dispersion, and correlation. Descriptive statistics summarize and describe data, while inferential statistics allow generalizing from samples to populations. Common descriptive statistics include measures of central tendency (mean, median, mode), variability (range, variance, standard deviation), and distribution properties. Inferential statistics techniques like hypothesis testing and ANOVA are used to make inferences about populations based on samples.
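The descriptive measures named above can be computed directly with Python's standard library; the sample data below are hypothetical and purely illustrative:

```python
import statistics

# Hypothetical exam scores (illustrative data, not from any study above)
scores = [62, 75, 75, 80, 68, 90, 71, 84, 77, 69]

mean = statistics.mean(scores)          # central tendency: mean
median = statistics.median(scores)      # central tendency: median
mode = statistics.mode(scores)          # central tendency: mode
data_range = max(scores) - min(scores)  # variability: range
var = statistics.variance(scores)       # variability: sample variance
sd = statistics.stdev(scores)           # variability: sample standard deviation
```

Inferential work then builds on these summaries, e.g. comparing a sample mean against a hypothesized population mean.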
A Study of Some Tests of Uniformity and Their PerformancesIOSRJM
This document discusses a study that evaluates the performance of eleven tests for uniformity. The tests include Kolmogorov-Smirnov, Anderson-Darling, Cramer-von Mises, Watson, Sukhatme, probability product, Kuiper, Gini, ZhangA, ZhangC tests. Through simulation, the power of these tests is assessed under various sample sizes and five alternative distributions. The results are displayed in tables and graphs. The document concludes that the performance of the tests depends on the sample size and nature of the alternative distribution.
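As a minimal sketch of one of the tests studied, the one-sample Kolmogorov-Smirnov statistic against Uniform(0,1) can be computed from the order statistics; the data here are simulated for illustration, not taken from the paper:

```python
import random

def ks_uniform(sample):
    # One-sample KS statistic against Uniform(0,1):
    # D_n = max_i max(i/n - x_(i), x_(i) - (i-1)/n)
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        d = max(d, i / n - x, x - (i - 1) / n)
    return d

random.seed(0)
uniform_data = [random.random() for _ in range(1000)]
skewed_data = [random.random() ** 3 for _ in range(1000)]  # clearly non-uniform

d_u = ks_uniform(uniform_data)  # should be small
d_s = ks_uniform(skewed_data)   # should be large
```

A larger statistic indicates a larger departure from uniformity, which is the basis of the power comparisons in the study.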
Stability criterion of periodic oscillations in a (4)Alexander Decker
1) The authors establish that the distribution of the harmonic mean of group variances is a generalized beta distribution through simulation.
2) They show that the generalized beta distribution can be approximated by a chi-square distribution.
3) This means that the harmonic mean of group variances is approximately chi-square distributed, though the degrees of freedom need not be an integer. Using the harmonic mean in place of the pooled variance allows hypothesis testing when group variances are unequal.
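A small sketch of the central quantity: the harmonic mean of k group variances, which the authors use in place of the pooled variance (the group data below are hypothetical):

```python
import statistics

# Hypothetical groups with clearly unequal variances (illustrative values)
groups = [
    [5.1, 4.8, 5.6, 5.0, 4.9],
    [3.2, 7.5, 1.1, 9.0, 4.4],
    [6.0, 6.1, 5.9, 6.2, 6.0],
]
variances = [statistics.variance(g) for g in groups]
k = len(variances)
# Harmonic mean: k divided by the sum of reciprocals
harmonic_mean = k / sum(1.0 / v for v in variances)
```

The harmonic mean is dominated by the smallest variances, which is why its distribution differs from that of the arithmetic (pooled) mean.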
A System of Estimators of the Population Mean under Two-Phase Sampling in Pre...Premier Publishers
This paper deals with estimation of the population mean under two-phase sampling. Utilizing information on two auxiliary variables, a system of estimators for estimating the finite population mean is proposed and its properties, up to the first order of approximation, are studied. As particular cases, various estimators are suggested. The performance of the suggested estimators is compared with some contemporary estimators of the population mean through numerical illustrations carried out over data sets of some natural populations. Also, a small-scale Monte Carlo simulation is carried out for the empirical comparison.
SEMI-PARAMETRIC ESTIMATION OF Px,y({(x,y)/x >y}) FOR THE POWER FUNCTION DISTR...IJESM JOURNAL
The stress-strength model describes the life of a component which has a random strength X and is subjected to a random stress Y, in the context of reliability. The component functions satisfactorily whenever X > Y, and it fails at the instant the stress applied to it exceeds the strength. R = P(Y < X) is a measure of component reliability. In this paper, we obtain semi-parametric estimators of the reliability under the stress-strength model for the power function distribution under complete and censored samples. We illustrate the performance of the estimators using a simulation study.
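A hedged Monte Carlo sketch of R = P(Y < X) when X and Y follow power function distributions F(x) = x^shape on (0,1); the shape values a = 2, b = 1 are illustrative assumptions, for which the analytic value is a/(a+b) = 2/3:

```python
import random

def power_sample(shape, rng):
    # Inverse-CDF sampling for the power function distribution F(x) = x**shape on (0,1)
    return rng.random() ** (1.0 / shape)

def estimate_reliability(a, b, n=100_000, seed=42):
    # Monte Carlo estimate of R = P(Y < X), X ~ PF(a), Y ~ PF(b), independent
    rng = random.Random(seed)
    hits = sum(power_sample(b, rng) < power_sample(a, rng) for _ in range(n))
    return hits / n

r_hat = estimate_reliability(2.0, 1.0)  # analytic value: a/(a+b) = 2/3
```

This is only the simulation baseline; the paper's contribution is semi-parametric estimation of R from observed (possibly censored) samples.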
This document discusses hypothesis testing using z- and t-tests. It begins by introducing key concepts like sampling distributions and the central limit theorem. It explains that as sample size increases, the sampling distribution of the mean approaches a normal distribution, even if the population is not normally distributed. It then provides an example to illustrate these concepts using a small population. The document discusses how the central limit theorem can be used to determine if a sampling distribution is approximately normal. It also explains that the rule of needing a sample size of 30 refers to approximating the t-distribution with the normal distribution, not the sampling distribution itself. Finally, it works through an example problem using a sampling distribution to solve a hypothesis test with a z-score.
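The z-test computation described can be sketched with the standard library's NormalDist; the population and sample figures below are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical example: known population mean 100 and sd 15,
# observed sample of n = 36 with sample mean 104
mu0, sigma, n, xbar = 100.0, 15.0, 36, 104.0

# z-score of the sample mean under the sampling distribution (CLT)
z = (xbar - mu0) / (sigma / sqrt(n))           # = 4 / 2.5 = 1.6
# Two-sided p-value from the standard normal CDF
p_two_sided = 2 * (1 - NormalDist().cdf(abs(z)))
```

With z = 1.6 the two-sided p-value is about 0.11, so this hypothetical sample would not reject the null at the 0.05 level.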
Dimensional analysis is a technique for simplifying physical problems by reducing the number of variables. It is useful for presenting experimental data, solving problems without exact theories, checking equations, and establishing relative importance of phenomena. Dimensional analysis refers to the physical nature and units of quantities. It can be used to develop equations for fluid phenomena, convert between unit systems, and reduce variables in experiments. Two common methods are Rayleigh's method, which determines expressions involving a maximum of four variables, and Buckingham's π-method, which arranges variables into dimensionless groups. Dimensional analysis provides a partial solution and requires understanding the relevant phenomena.
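The π-method's bookkeeping can be sketched by summing (M, L, T) dimension exponents; the Reynolds number ρVL/μ is used here as a standard illustrative dimensionless group, not one taken from the text above:

```python
# (mass, length, time) exponents for each physical variable
DIMS = {
    "rho": (1, -3, 0),   # density, M L^-3
    "V":   (0, 1, -1),   # velocity, L T^-1
    "L":   (0, 1, 0),    # characteristic length, L
    "mu":  (1, -1, -1),  # dynamic viscosity, M L^-1 T^-1
}

def group_dims(powers):
    # powers: {variable: exponent}; returns the summed (M, L, T) exponents,
    # which must all be zero for a dimensionless pi-group
    totals = [0, 0, 0]
    for name, p in powers.items():
        for k in range(3):
            totals[k] += DIMS[name][k] * p
    return tuple(totals)

# Reynolds number rho * V * L / mu
reynolds = group_dims({"rho": 1, "V": 1, "L": 1, "mu": -1})
```

A result of (0, 0, 0) confirms the group is dimensionless, which is exactly the check the π-method performs when arranging variables.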
Community Analysis of Fashion Coordination Using a Distance of Categorical Da...IJERA Editor
While various clustering methods have been proposed for categorical data, few studies exist on the clustering of categorical variables that are represented as item sets (groups), e.g. fashion coordination data. In order to fill this gap, this study focuses on the patterns of similarities found between coordinated fashion items, so as to define a new set of score value rows for calculating similarity indexes from the patterns. These similarity indexes are then inverted into distances, on which the clustering method proposed in this study (hereinafter "the Proposed Method") is based. Two evaluation experiments were then conducted using about 150,000 coordination data records obtained from the WEAR website. First, a silhouette analysis confirms that the Proposed Method offers better cluster partitioning in comparison to previous studies. Second, an efficacy evaluation reveals seasonal patterns in fashion coordination, as well as trends for brands and colors of individual fashion items.
A Stochastic Iteration Method for A Class of Monotone Variational Inequalitie...CSCJournals
We examine a general method for obtaining a solution to a class of monotone variational inequalities in Hilbert space. Let H be a real Hilbert space, let T : H -> H be a continuous linear monotone operator, and let K be a nonempty closed convex subset of H. Starting from an arbitrary initial point x0 ∈ K, we propose an iterative method that converges in norm to a solution of the class of monotone variational inequalities. A stochastic scheme {xn} is defined as follows: x(n+1) = xn - an F*(xn), n ≥ 0, where F*(xn) is a strong stochastic approximation of Txn - b, for all b (possibly zero) ∈ H and an ∈ (0,1).
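A deterministic sketch of the scheme above, with the stochastic approximation F*(x_n) replaced by the exact residual T x_n - b, and a small symmetric positive definite matrix standing in for the monotone operator (all values illustrative):

```python
# Illustrative monotone operator on R^2: symmetric positive definite matrix
T = [[2.0, 0.5],
     [0.5, 1.0]]
b = [1.0, 1.0]

def apply_T(x):
    # Matrix-vector product T x
    return [sum(T[i][j] * x[j] for j in range(2)) for i in range(2)]

x = [0.0, 0.0]
a_n = 0.5  # constant step in (0,1), as the scheme requires a_n in (0,1)
for _ in range(5000):
    tx = apply_T(x)
    # x_{n+1} = x_n - a_n * (T x_n - b)
    x = [x[i] - a_n * (tx[i] - b[i]) for i in range(2)]
```

For this positive definite T the iteration contracts, so x converges to the solution of T x = b, i.e. (2/7, 6/7).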
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATIONmathsjournal
In numerical analysis, interpolation is a way of calculating the unknown values of a function for any given value of the argument within the limits of the arguments. It basically provides a means of estimating unknown data with the aid of related known data. The main goal of this research is to construct a central difference interpolation method derived from the combination of Gauss's third formula, Gauss's backward formula and Gauss's forward formula. We also present graphical comparisons of all the existing interpolation formulas with our proposed method of central difference interpolation. By the comparison and graphical presentation, the new method gives the best result, with the lowest error among the existing interpolation formulas.
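For comparison purposes only, a baseline interpolation over equally spaced tabulated values (generic Lagrange form, not the paper's proposed central-difference formula; any interpolating polynomial through the same nodes is identical, since the interpolant is unique):

```python
def lagrange(xs, ys, x):
    # Lagrange form of the unique polynomial through the points (xs[i], ys[i])
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Hypothetical equally spaced table of f(x) = x^2
xs = [1.0, 2.0, 3.0, 4.0]
ys = [v ** 2 for v in xs]
y_mid = lagrange(xs, ys, 2.5)  # interpolate near the central arguments
```

Central-difference formulas such as Gauss's are preferred near the middle of the table because the differences used there are the best conditioned, but they reproduce this same polynomial.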
The Characteristics of Traffic Accidents Caused by Human Behavior on the Road...theijes
The document summarizes a study analyzing characteristics of traffic accidents caused by human behavior on Mayjen Sungkono Road in Malang City, Indonesia from 2008 to 2012. Some key findings include:
1) 87% of accidents were caused by private individuals.
2) Over 50% of accident perpetrators came from Malang district.
3) Nearly 50% of accident perpetrators and over 35% of pedestrian victims involved were between 26-45 years old.
A General-Purpose Architectural Approach to Energy Efficiency for Greendroid ...theijes
Mobile application processors are soon to replace desktop processors as the focus of innovation in microprocessor technology. Already, these processors have largely caught up to their more power-hungry cousins, supporting out-of-order execution and multicore processing. In the near future, the exponentially worsening problem of dark silicon is going to be the primary force that dictates the evolution of these designs. We have argued that the natural evolution of mobile application processors is to use this dark silicon to create hundreds of automatically generated energy-saving cores, called conservation cores, which can reduce energy consumption by an order of magnitude. Conservation cores (C-cores) address the utilization wall and, consequently, the dark silicon problem. Greendroid is a development library for the Android platform, intended to make UI development easier and consistent across applications. This paper describes GreenDroid, a research prototype that demonstrates the use of such cores to save energy broadly across the hotspots in the Android mobile phone software stack.
Availability of a Redundant System with Two Parallel Active Componentstheijes
This paper considers a redundant system which consists of two parallel active components. The time-to-failure and the time-to-repair of the components follow an exponential and a general distribution, respectively. The repairs of failed components are randomly interrupted. The time-to-interrupt is taken from an exponentially distributed random variable and the interrupt times are generally distributed. We obtain the availability for the system.
Production of CH4 and C2 hydrocarbons by axial and radial pulse H2/CO2 discha...theijes
Production of methane (CH4) from a gas mixture of carbon dioxide (CO2) and hydrogen (H2) has been established by two types of pulse discharges. One is an axial discharge using a pair of thin Ni wire electrodes separated by a narrow gap, and the other is a coaxially radial discharge using an inner rod electrode and an outer tube electrode made of stainless steel (SUS). The former provides an intense gap discharge, while the latter provides a gentle discharge in the annular region. Decomposition of CO2 is enhanced in the former case when a Ni (nickel) mesh disc electrode is placed behind the gap; Ni is a known catalyst. When the radial discharge proceeds in a closed gas system, C2 hydrocarbons such as ethane and ethylene are generated when a cylindrical mesh electrode made of Ni is attached to the powered SUS tube electrode. Both the CH4 production and the energy efficiency of CH4 production are enhanced with Ni mesh electrodes, without additional heating for the Ni catalyst. A synergy effect of plasma and Ni catalyst is observed.
Experimental Determination of Fracture Energy by RILEM Methodtheijes
This paper deals with the investigation of the fracture energy (GF) of concrete. The study involves experimental determination of fracture energy (GF) by testing three-point bend concrete beams of the same size but varying notch-to-depth ratios. RILEM fracture energy (GF) and stress intensity factor values are determined.
Concepts and Trends on E -Learning in Romaniatheijes
E-Learning systems analysis appears to be simple if we take into consideration some of the approaches employed by researchers so far. But the dynamics of e-Learning requires caution. For more than seven years there have been discussions about a new generation of e-Learning, namely e-Learning 2.0. The e-Learning market is estimated to bring revenue of U.S. $56.2 billion in 2013 according to Certifyme.net, the industry leader in online training, and this amount is projected to double by the end of 2015. In Romania, a team of researchers from the Centre for Development and Innovation in Education has been using techniques aimed at identifying and classifying theoretical and practical approaches to training and education. If we consider the mission of this non-governmental organization (with no political affiliation) to promote principles and values in education through innovative technologies and approaches (such as conducting e-Learning programs and projects, developing curricula, education for democratic citizenship, lifelong learning and continuous training of teachers), then we can rely on the experience of the organization and the seriousness with which the foundation is involved in defining the e-Learning phenomenon.
Retrospective and Prospective Studies of Gastro-Intestinal Helminths of Human...theijes
A five-year retrospective and a one-year prospective study of gastrointestinal (GIT) helminths were carried out in humans and dogs in Makurdi, Nigeria. Data from 534 individuals presented at the Federal Medical Centre (FMC) and 103 faecal samples from dogs at the Veterinary Teaching Hospital (VTH), University of Agriculture, Makurdi from 2007 to 2014 were used. The overall prevalence of zoonotic GIT helminths was 76.21% (407/534) in humans and 56.31% (58/103) in dogs. The differences in prevalence in humans based on sex, ethnicity and age were not statistically significant (χ², P > 0.05). However, the test of individual factors (coefficients) on GIT helminths in humans showed that hookworm prevalence was dependent on age (P = 0.001), Ascaris lumbricoides was dependent on ethnicity and age (P = 0.000 and 0.005), Taenia spp. prevalence was dependent on age and sex (P = 0.007 and 0.005), and Strongyloides stercoralis prevalence was dependent on age (P = 0.04). The prevalence in dogs depended on age and breed (χ², P < 0.05) but not on sex (χ², P > 0.05). Hookworms, Taenia spp. and Trichuris vulpis occurred in humans and dogs. Hookworms were the most common helminth of both humans and dogs. Testing individual factors (coefficients) for the effect of risk factors on specific helminths is essential in understanding the epidemiology of each helminth. Attention should be paid to control measures in man and dogs.
Paleoenvironment and Provenance Studies of Ajali Sandstone in Igbere Area, Af...theijes
The stratigraphy and sedimentology of the Ajali Sandstone successions in the Igbere area, Afikpo Basin were studied in order to determine the paleoenvironmental setting and source model of the deposits. The studied deposits consist of five lithofacies, namely: pebbly sandstone facies; cross-bedded, laminated and bioturbated sandstone facies; and mudstone facies. Paleoenvironmental interpretation based on facies associations and sedimentary structures revealed tide-influenced fluvial deposits, while inferences from bivariate plots of calculated univariate parameters indicated fluvial deposits. The granulometric analyses of the sediments indicated a predominantly moderately sorted, medium-grained sandstone with some poorly sorted populations. The kurtosis ranged from mesokurtic through leptokurtic to extremely leptokurtic sand populations, generally with some symmetrical, positive and negative skewness. This result is suggestive of a sand population with different tails, especially for the facies representing the poorly sorted populations. The sandstone in the area is essentially quartz sandstone or quartz arenite based on petrographic analysis. The relative abundance of the framework elements (Q 96, F 0 and R 4) suggests super-mature sand with a maturity index of 19.0. The mineralogical and textural maturity of the sandstone therefore indicated a polycyclic deposit. This, together with the constituent heavy minerals and paleocurrent directions, inferred that sources of detritus were from both an uplifted continental pluton and an old sedimentary domain, respectively. The Crystalline Basement rocks of the Cameroon and Adamawa Highlands, the Oban Massif and the western Nigeria Ilesha Spur on the one hand, and the Abakaliki Anticlinorium on the other hand, both satisfied such source models for the post-Santonian Ajali quartz-sand deposit.
Analytical Execution of Dynamic Routing Protocols For Video Conferencing Appl...theijes
In modern network communications, routing protocols play an important role in the user data path: they are responsible for enabling routers to communicate with one another and to forward packets over the best path from a source node to a destination node. The dynamic routing protocols RIP, OSPF and EIGRP are examined here for addressing various networks with different traffic environments. In this paper, the performance of these protocols is estimated with respect to factors such as convergence activity and duration, average throughput, network end-to-end delay, and point-to-point utilization, over simulations based on the OPNET academic version. From the simulation results, EIGRP is confirmed to have the fastest convergence time across the network topologies, and OSPF has the highest point-to-point utilization in the network, followed by EIGRP and then RIP. The aim is thus to find out which protocols are suitable for which networks and, from the analyses, to understand the role of the routing protocols in different network scenarios.
Queue with Breakdowns and Interrupted Repairstheijes
This paper considers a queue consisting of a Poisson input stream and a server. The server is subject to breakdowns. The times to failure of the server follow an exponential distribution. The failed server requires repair at a facility which has an unreliable repair crew. The repair times of the failed server follow an exponential distribution, but the repair crew is also subject to breakdown while it is repairing. The times to failure of the repair crew are also assumed to be exponentially distributed. This paper obtains the steady-state performance of the queue with server breakdowns and interrupted repairs.
Delay of Geo/Geo/1 N-limited Nonstop Forwarding Queuetheijes
Nonstop forwarding is designed to minimize packet loss when a management system fails to function. A system with nonstop forwarding can continue to forward some packets even in the event of a processor failure. We consider a Geo/Geo/1 N-limited nonstop forwarding queue. In this queueing system, when the server breaks down, up to N customers can be serviced during the repair time. The delay distribution of customers is obtained by matrix-geometric analysis.
An Analysis of the E-Agriculture Research Field Between 2005 and 2015theijes
In this study, research papers featured in the following peer-reviewed journals and conference were analysed: The African Journal of Information Systems (AJIS), The African Journal of Information and Communication Technology (AJICT), the Electronic Journal of Information Systems in Developing Countries (EJISDC) and the IST-Africa Conference series. These papers are those which covered the e-agriculture field, released between 2005 and 2015 within the African context. The intention of the analysis was to establish research patterns characterising the publications over a period of 10 years. The results show an increase in published papers on e-agriculture over the years. Most of the papers were conducted in East Africa and were assessing the potential of available ICTs for agriculture. Moreover, the predominant scope of analysis was the country level, while descriptive research questions featured more. Furthermore, the most adopted research paradigm was critical theory, and the knowledge contribution was best practice. Lastly, the most adopted technology-object (of many papers) was infrastructure, while a large percentage of recommendations were on planning.
A Case Study of Teaching the Concept of Differential in Mathematics Teacher T...theijes
In high schools of Viet Nam, teaching calculus includes knowledge of real functions of a real variable. A mathematics educator in France, Artigue (1996), has shown that approximation methods and techniques are at the center of the major problems (including number approximation and function approximation) in calculus. However, in mathematics teaching in Vietnam, problems of approximation hardly appear. With the task of training mathematics teachers for high schools under the new orientations, we present a part of our research aimed at improving the contents and methods of teacher training.
Application of Synchronized Phasor Measurements Units in Power Systemstheijes
Over the last decades, the electric power industry has been undergoing multiple changes due to the process of deregulation, providing efficient power generation, technological innovations, and eventually lower retail prices. In this environment, dynamic phenomena in power systems have made ever more urgent the development of reliable tools for their monitoring and control. An effective tool for the close monitoring of their operating conditions is the state estimator. Traditional estimators are based on real-time measurements obtained through the SCADA (Supervisory Control and Data Acquisition) system. These measurements are commonly provided by remote terminal units (RTUs) installed at high-voltage substations. The phase angle of bus voltages cannot be easily measured due to technical difficulties associated with the synchronization of measurements at RTUs. This weakness was eliminated with the arrival of the Global Positioning System (GPS), which led to the development of Phasor Measurement Units (PMUs). A PMU, equipped with a GPS receiver, provides high-accuracy voltage and current phasor measurements with respect to a common reference phase angle. In the first part of the paper, an overview of PMU technology and a review of the optimal allocation of PMUs in power networks are presented. The most important issues regarding the design and operation of PMUs are discussed and an analysis of their commercial penetration in the electric energy markets is made. The second part of the paper presents a wide range of applications related to the choice of strategic PMU placement, as well as an algorithm for finding the optimal number of PMUs needed for full observability.
Review of Waste Management Approach for Producing Biomass Energy in Indiatheijes
The high volatility in fuel prices in the recent past, resulting turbulence in energy markets and the increase of the GHG has compelled many countries to look for alternate sources of energy, for both economic and environmental reasons. With growing public awareness about sanitation, and with increasing pressure on the government and urban local bodies to manage waste more efficiently, the Indian waste to energy sector is poised to grow at a rapid pace in the years to come. The dual pressing needs of waste management and reliable renewable energy source are creating attractive opportunities for investors and project developers in the waste to energy sector.
Quantum Current in Graphene Nano Scrolls Based Transistortheijes
Applications of graphene-based materials as a new-century material are growing rapidly; their carrier transport phenomena, with fast carrier mobility, have been a recent focus. In the graphene family, nanoscrolls, because of their special structure, need to be explored. In the presented work, a theoretical model for carrier transport in Archimedean graphene nanoscrolls is reported. The chirality-dependent electrical property of the graphene nanoscroll is considered, and then a Schottky-transistor-based platform is modeled. The transport coefficient, as a fundamental transport factor, is discussed. The effect of geometrical parameters on the working phenomenon is considered as well.
The Ecological Protection Marxism Idealogy and Its Implications for the New R...theijes
This document discusses the ecological protection Marxism ideology and its implications for new rural construction in Vietnam. It makes the following key points:
1. Karl Marx and Friedrich Engels warned about the relationship between humans and nature, and that exploiting nature without understanding its rules can have negative consequences for both nature and human society.
2. While Vietnam has made progress in rural environmental protection, pollution remains a serious problem due to industrialization and modernization. Solutions proposed include strengthening environmental education, management, social participation, planning, technology, and international cooperation.
3. The main solutions outlined are to strengthen environmental awareness and responsibility, improve state management of the rural environment, promote social involvement in protection efforts, ensure sustainable industrial and
The Effect of Personality Traits on Social Identification, Transformational L...theijes
This study aims to model the effect of personality traits on social identification, transformational leadership and employee performance. To examine the patterns of effects between the variables, an inferential analysis tool, SPSS version 21.0, was used. The results of this study indicate that personality traits can improve employee performance when incorporating the intervening variables, namely social identification and transformational leadership, in the Provincial Government of Southeast Sulawesi.
Effects of Self Compacting Concrete Using the Discrete Models as Binary & Ter...theijes
The effect of using nanosized [4],[5] pozzolanic materials [1],[12],[14] like fly ash (FA) [3], metakaolin (MK) [8], silica fume (SF) [6], rice husk ash (RHA) [14], ground granulated blast furnace slag (GGBFS) [2], etc., as partial replacement by dry weight of ordinary Portland cement (OPC) is to enhance the strength, durability and workability of concrete. The test results of the fresh and hardened properties of self-compacting concrete (SCC) [8],[19] incorporating pozzolanic materials at various percentages are reported, fixing the water-to-binder (i.e. powder) ratio (w/b) at 0.45. The effects of pozzolanic materials on the properties of SCC were investigated by comparing the test results. Various tests [4],[5],[9] were conducted on fresh SCC, such as the slump flow, the L-box passing ability of the SCC mixtures, and the T500mm slump flow time. A compressive strength test [9], along with the initial surface absorption test (ISAT) and the capillary suction test (CST) [7], was also performed on the hardened SCC [8].
The Investigation of Primary School Students’ Ability to Identify Quadrilater...theijes
In Vietnamese mathematics curricula, primary school students explicitly learn the concepts of quadrilaterals such as parallelogram, rhombus, rectangle, square and trapezoid in Grades 3, 4 and 5. They are presented individually, and there is no comparison between their characteristics. Therefore, students find it difficult to recognize the relationships among kinds of quadrilaterals. The results of an investigation of 186 primary school students revealed that most of them found it easy to identify squares and rectangles, but many of them asserted that "a square is not a rectangle".
Measuring Robustness on Generalized Gaussian DistributionIJERA Editor
Different from previous work that measured robustness on its own distribution, measuring robustness with a robust estimator on a generalized Gaussian distribution is introduced here. In detail, an unbiased maximum likelihood (ML) variance estimator and a robust variance estimator of the Gaussian distribution with a given censoring value are applied to the generalized Gaussian distribution, which represents the Gaussian, Laplace and Cauchy distributions; then, the mean square error (MSE) is calculated to measure robustness. Afterward, it is shown how robustness changes as the actual distribution varies over the generalized Gaussian family. The results indicate that measuring the MSE of the system can be used to indicate how robust the system is when the system distribution changes.
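A simplified sketch of the MSE-based robustness measurement: the MSE of the sample-variance estimator is estimated by Monte Carlo under a Gaussian and a heavy-tailed Laplace sample, both with unit variance (the Cauchy case is omitted here since its variance does not exist; sample sizes and replication counts are arbitrary choices):

```python
import math
import random
import statistics

def mse_of_variance(sampler, true_var=1.0, n=50, reps=2000, seed=1):
    # Monte Carlo MSE of the unbiased sample-variance estimator
    rng = random.Random(seed)
    errs = []
    for _ in range(reps):
        xs = [sampler(rng) for _ in range(n)]
        errs.append((statistics.variance(xs) - true_var) ** 2)
    return statistics.mean(errs)

def gauss_unit(rng):
    return rng.gauss(0.0, 1.0)

def laplace_unit(rng):
    # Laplace(scale b) as a difference of two iid exponentials with mean b;
    # b = 1/sqrt(2) gives unit variance (Var = 2 * b**2)
    b = 1.0 / math.sqrt(2.0)
    return rng.expovariate(1.0 / b) - rng.expovariate(1.0 / b)

mse_gauss = mse_of_variance(gauss_unit)
mse_laplace = mse_of_variance(laplace_unit)
```

The heavier-tailed Laplace sample yields a larger MSE for the same estimator and sample size, which is the kind of change the paper uses to quantify robustness as the shape parameter varies.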
This document evaluates the effectiveness of the Folk and Ward graphic method for describing grain-size distributions. The authors generate theoretical grain-size distributions consisting of randomly generated 'grains' of known size, shape and density to compare graphic parameters to moment measures calculated directly from the ungrouped frequency data. They find that graphic and ungrouped means are usually similar, except for highly skewed distributions. Ungrouped standard deviations are generally larger than graphic values, especially for moderately sorted samples with long tails. Skewness and kurtosis values from the two methods are only weakly related, indicating the graphic measures respond erratically to deviations from normality. The authors conclude that classification schemes should only use graphic parameters if the range of values is large.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
This document presents a method for calculating asymptotic confidence intervals for indirect effects in structural equation models. The author first shows that under maximum likelihood estimation, the coefficient vector in a recursive structural equation model is asymptotically normally distributed. Then, using the multivariate delta method, it is shown that nonlinear functions of the coefficients, such as indirect effects, are also asymptotically normally distributed. This allows confidence intervals to be constructed for indirect effects based on their asymptotic distribution. An example is worked through to demonstrate the method.
This document summarizes a research paper that estimates the scale parameter of the Nakagami distribution using Bayesian methods. The paper derives the posterior distributions of the scale parameter under different prior distributions, including uniform, inverse exponential, and Levy priors. It then finds the Bayesian estimators of the scale parameter under three loss functions: squared error loss function, quadratic loss function, and precautionary loss function. The paper uses Monte Carlo simulations to compare the performance of the different estimators.
Estimation of parameters of linear econometric model and the power of test in...Alexander Decker
1) The document discusses using Monte Carlo simulations to estimate parameters of a linear econometric model and test the power of statistical tests in the presence of heteroscedasticity.
2) Two forms of heteroscedasticity (h(x) = X^(1/2) and h(x) = X) were introduced into the model. Sample sizes of 20, 50, and 100 were used with 50 replications each.
3) The bias, variance, and root mean square error of ordinary least squares (OLS) and generalized least squares (GLS) estimators were examined across sample sizes and heteroscedasticity forms. Both estimators were found to be unbiased but neither was consistently more efficient.
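A pure-Python sketch of one cell of such a simulation: OLS slope estimates under heteroscedasticity h(x) = x^(1/2), averaged over replications to check approximate unbiasedness; the coefficients, sample size and replication count below are illustrative, not those of the paper:

```python
import math
import random

def ols_slope(xs, ys):
    # Closed-form OLS slope for simple linear regression
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

rng = random.Random(7)
b0, b1 = 1.0, 2.0  # illustrative true intercept and slope
slopes = []
for _ in range(500):  # 500 replications at sample size 50
    xs = [rng.uniform(1.0, 10.0) for _ in range(50)]
    # heteroscedastic error: standard deviation grows as h(x) = sqrt(x)
    ys = [b0 + b1 * x + math.sqrt(x) * rng.gauss(0.0, 1.0) for x in xs]
    slopes.append(ols_slope(xs, ys))
mean_slope = sum(slopes) / len(slopes)
```

OLS remains unbiased under heteroscedasticity (the mean slope stays near the true value); what suffers is efficiency, which is why the comparison with GLS is of interest.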
ALPHA LOGARITHM TRANSFORMED SEMI LOGISTIC DISTRIBUTION USING MAXIMUM LIKELIH...BRNSS Publication Hub
The document discusses the alpha logarithm transformed semi-logistic distribution and its maximum likelihood estimation method. It introduces the distribution, provides its probability density function and cumulative distribution function. It then describes generating random numbers from the distribution and outlines the maximum likelihood estimation method to estimate the distribution's unknown parameters. This involves deriving the likelihood function and taking its partial derivatives to obtain equations that are set to zero and solved to find maximum likelihood estimates of the location, scale, and shape parameters.
Method Comparison Studies of OLS-Bisector Regression and Ranged Major Axis Re...Zhaoce Liu
Linear regression is a widely used technique in many disciplines of science. When both the dependent and independent variables of the model are random, there are errors associated with the measurement of x and y variables, such models are called Errors-in-Variable models. Under this situation, Model II Regression techniques should be used for parameter estimation. We focus on the performance study of OLS-Bisector Method (Isobe et al., 1990). We compared this method to Ranged Major Axis (RMA), which is proposed by Legendre and Legendre (2012) as an alternative to Reduced Major Axis. Our simulation study addresses the coverage and width of the 95% confidence intervals. We conclude that performance of the two methods is very similar but OLS-Bisector gives a more accurate estimate than Ranged Major Axis when sample size is small.
Penalized Regressions with Different Tuning Parameter Choosing Criteria and t...CSCJournals
Recently a great deal of attention has been paid to modern regression methods such as penalized regressions which perform variable selection and coefficient estimation simultaneously, thereby providing new approaches to analyze complex data of high dimension. The choice of the tuning parameter is vital in penalized regression. In this paper, we studied the effect of different tuning parameter choosing criteria on the performances of some well-known penalization methods including ridge, lasso, and elastic net regressions. Specifically, we investigated the widely used information criteria in regression models such as Bayesian information criterion (BIC), Akaike’s information criterion (AIC), and AIC correction (AICc) in various simulation scenarios and a real data example in economic modeling. We found that predictive performance of models selected by different information criteria is heavily dependent on the properties of a data set. It is hard to find a universal best tuning parameter choosing criterion and a best penalty function for all cases. The results in this research provide reference for the choices of different criteria for tuning parameter in penalized regressions for practitioners, which also expands the nascent field of applications of penalized regressions.
This document describes a new method called Likelihood-based Sufficient Dimension Reduction (LAD) for estimating the central subspace. LAD obtains the maximum likelihood estimator of the central subspace under the assumption of conditional normality of the predictors given the response. Analytically and in simulations, LAD is shown to often perform much better than existing methods like Sliced Inverse Regression, Sliced Average Variance Estimation, and Directional Regression. LAD also inherits useful properties from maximum likelihood theory, such as the ability to estimate the dimension of the central subspace and test conditional independence hypotheses.
This thesis aims to formulate a simple measurement to evaluate and compare the predictive distributions of out-of-sample forecasts between autoregressive (AR) and vector autoregressive (VAR) models. The author conducts simulation studies to estimate AR and VAR models using Bayesian inference. A measurement is developed that uses out-of-sample forecasts and predictive distributions to evaluate the full forecast error probability distribution at different horizons. The measurement is found to accurately evaluate single forecasts and calibrate forecast models.
Introduction to financial forecasting in investment analysisSpringer
This document provides an overview of regression analysis and forecasting models. It discusses how regression analysis can be used to analyze the relationship between independent and dependent variables and make forecasts. Specifically, it explains simple linear regression, involving one independent variable, and multiple regression, involving more than one independent variable. It also covers topics such as estimating the regression line, hypothesis testing of regression coefficients, and using regression to estimate a security's beta.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Model of robust regression with parametric and nonparametric methodsAlexander Decker
This document summarizes and compares several parametric and nonparametric methods for estimating the parameters in a simple linear regression model when outliers are present in the data. It introduces ordinary least squares regression as the classical parametric method and discusses its limitations when outliers are present. It then summarizes several nonparametric and robust regression methods that are less influenced by outliers, including Theil's method, least absolute deviations regression, M-estimation, and trimmed least squares regression. The document presents the models and algorithms for these various methods. It concludes by describing a simulation study that evaluates and compares the performance of these different estimation techniques under various types and amounts of outliers.
The document discusses various methods for constructing confidence intervals for estimating multinomial proportions. It aims to analyze the propensity for aberrations (i.e. unrealistic bounds like negative values) in the interval estimates across different classical and Bayesian methods. Specifically, it provides the mathematical conditions under which each method may produce aberrant interval limits, such as zero-width intervals or bounds exceeding 0 and 1, especially for small sample counts. The document also develops an R program to facilitate computational implementation of the various methods for applied analysis of multinomial data.
Dimensionality Reduction Techniques In Response Surface Designsinventionjournals
Dimensionality reduction has enormous applications in various fields in industries. It can be applied in an optimal way with respect to time and cost related to the agricultural sciences, mechanical engineering, chemical technology, pharmaceutical sciences, clinical trials, biological studies, image processing, pattern recognitions etc. Several researchers made attempts on the reduction of the size of the model for different specific problems using some mathematical and statistical techniques identifying and eliminating some insignificant variables.This paper presents a review of the available literature on dimensionality reduction
Variance component analysis by paravayya c pujeriParavayya Pujeri
This document discusses variance component analysis and provides examples of its applications and methodology. It begins by defining key concepts such as fixed and random factors, effects, and mixed effect models. It then explains that variance component analysis partitions total variation in a dependent variable into components associated with random effects variables. The document provides examples of estimating variance components using ANOVA and examples analyzing agricultural and interlaboratory study data. It concludes that variance component analysis helps partition variation and determine where to focus efforts to reduce variance.
This study evaluated the performance of bootstrap confidence intervals for estimating slope coefficients in Model II regression with three or more variables. Simulation studies were conducted for different correlation structures between variables, sampling from both normal and lognormal distributions. The results showed that bootstrap intervals provided less than the nominal 95% coverage. Scenarios with strong relationships between variables produced better coverage, while scenarios with weaker relationships and bias produced poorer coverage, even with larger sample sizes. Future work could explore additional scenarios and alternative interval methods to improve accuracy of confidence intervals in Model II regression.
This document summarizes research on direct non-linear inversion of 1D acoustic media using the inverse scattering series. Key points:
- A method is derived for directly inverting 1D acoustic media with varying velocity and density, without requiring an estimate of properties above reflectors or assuming linear relationships between property changes and reflection data.
- Testing on a single reflector case showed improved estimates of property changes beyond the reflector compared to linear methods, for a wider range of angles.
- A special parameter related to velocity change was identified that has the correct sign in the linear inversion and is not affected by issues like "leakage" that complicate inversions.
https://utilitasmathematica.com/index.php/Index
Our Journal has recently transitioned to becoming a fully open-access journal. This transition marks a significant shift in the landscape of academic publishing, aiming to provide numerous benefits to both authors and readers. Open access has the potential to democratize knowledge, enhance research impact, and foster greater collaboration within the academic community.
Similar to Multiple Linear Regression Model with Two Parameter Doubly Truncated New Symmetric Distributed Errors (20)
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
TIME DIVISION MULTIPLEXING TECHNIQUE FOR COMMUNICATION SYSTEMHODECEDSIET
Time Division Multiplexing (TDM) is a method of transmitting multiple signals over a single communication channel by dividing the signal into many segments, each having a very short duration of time. These time slots are then allocated to different data streams, allowing multiple signals to share the same transmission medium efficiently. TDM is widely used in telecommunications and data communication systems.
### How TDM Works
1. **Time Slots Allocation**: The core principle of TDM is to assign distinct time slots to each signal. During each time slot, the respective signal is transmitted, and then the process repeats cyclically. For example, if there are four signals to be transmitted, the TDM cycle will divide time into four slots, each assigned to one signal.
2. **Synchronization**: Synchronization is crucial in TDM systems to ensure that the signals are correctly aligned with their respective time slots. Both the transmitter and receiver must be synchronized to avoid any overlap or loss of data. This synchronization is typically maintained by a clock signal that ensures time slots are accurately aligned.
3. **Frame Structure**: TDM data is organized into frames, where each frame consists of a set of time slots. Each frame is repeated at regular intervals, ensuring continuous transmission of data streams. The frame structure helps in managing the data streams and maintaining the synchronization between the transmitter and receiver.
4. **Multiplexer and Demultiplexer**: At the transmitting end, a multiplexer combines multiple input signals into a single composite signal by assigning each signal to a specific time slot. At the receiving end, a demultiplexer separates the composite signal back into individual signals based on their respective time slots.
### Types of TDM
1. **Synchronous TDM**: In synchronous TDM, time slots are pre-assigned to each signal, regardless of whether the signal has data to transmit or not. This can lead to inefficiencies if some time slots remain empty due to the absence of data.
2. **Asynchronous TDM (or Statistical TDM)**: Asynchronous TDM addresses the inefficiencies of synchronous TDM by allocating time slots dynamically based on the presence of data. Time slots are assigned only when there is data to transmit, which optimizes the use of the communication channel.
### Applications of TDM
- **Telecommunications**: TDM is extensively used in telecommunication systems, such as in T1 and E1 lines, where multiple telephone calls are transmitted over a single line by assigning each call to a specific time slot.
- **Digital Audio and Video Broadcasting**: TDM is used in broadcasting systems to transmit multiple audio or video streams over a single channel, ensuring efficient use of bandwidth.
- **Computer Networks**: TDM is used in network protocols and systems to manage the transmission of data from multiple sources over a single network medium.
### Advantages of TDM
- **Efficient Use of Bandwidth**: TDM all
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Recycled Concrete Aggregate in Construction Part II
Multiple Linear Regression Model with Two Parameter Doubly Truncated New Symmetric Distributed Errors
The International Journal of Engineering and Science (IJES)
|| Volume || 6 || Issue || 3 || Pages || PP 01-08 || 2017 ||
ISSN (e): 2319 – 1813 ISSN (p): 2319 – 1805
DOI: 10.9790/1813-0603010108 www.theijes.com Page 1
Multiple Linear Regression Model with Two Parameter Doubly Truncated New Symmetric Distributed Errors
Bisrat Misganaw Geremew (1), K. Srinivasa Rao (2)
(1), (2) Department of Statistics, Andhra University, Vishakhapatnam 530 003, India
-------------------------------------------------------- ABSTRACT ---------------------------------------------------------------
The most commonly used method to describe the relationship between response and independent variables is a linear model with Gaussian distributed errors. In practice, however, the variables examined might not be mesokurtic, and the population values may be finitely bounded. In this paper, we introduce a multiple linear regression model with two-parameter doubly truncated new symmetric distributed (DTNSD) errors for the first time. To estimate the model parameters we use the methods of maximum likelihood (ML) and ordinary least squares (OLS). Model selection criteria such as the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) are used. A simulation study is performed to analyse the properties of the model parameter estimators. A comparative study of the doubly truncated new symmetric linear regression model against the Gaussian model shows that the proposed model gives a good fit to data sets in which the error term follows a DTNSD.
Keywords: Doubly truncated new symmetric distribution, Maximum likelihood method of estimation, Multiple linear regression model, Regression model, Simulation.
--------------------------------------------------------------------------------------------------------------------------
Date of Submission: 09 February 2017 Date of Acceptance: 05 March 2017
---------------------------------------------------------------------------------------------------------------------------
I. INTRODUCTION
The estimation of parameters in a regression model is important in many fields of study for understanding practical relationships between variables. Regression model theory and its applications have been studied by many authors. The method is basically grounded on a statistical model wherein the error terms are assumed to be independent and identically distributed Gaussian random variables with zero mean vector and positive definite covariance matrix (Srivastava, M.S., 2002). Recently, there have been numerous studies on the influence of non-Gaussianity in linear regression analysis. Bell-shaped or roughly bell-shaped distributions are encountered in a wide range of applications and, through the Central Limit Theorem, provide the underpinning for the characteristics of the sampling distributions upon which statistical inference is primarily based. Regardless of the application, the range of a normally distributed variate extends from $-\infty$ to $+\infty$. But for data sets where the variate has a finite range, regression models with Gaussian errors may not fit well. This problem has stimulated the consideration of truncated distributions.
Formulae for the parameter estimates of a singly truncated normal distribution were developed by Pearson and Lee (1908), using the method of moments. Other papers on this subject include Cohen (1950), in which cases of doubly truncated and censored samples were considered. The moment generating function of the lower truncated multivariate normal distribution was given in Tallis (1961) and, in principle, it could be used to compute all the product moments of the lower truncated multivariate normal. Tallis (1961) provides explicit expressions of some lower order moments for the n = 2 and n = 3 cases. Cohen (1949, 1950a, 1950b) studied the problem of estimating the mean and variance of normal populations from singly truncated samples. Moreover, Cohen (1961) provided tables for the maximum likelihood estimators of singly truncated and singly censored samples.
J.J. Sharples and J.C.V. Pezzey (2007) stated results for multivariate normal distributions truncated by a hyperplane. They presented results by calculating the expected values of the features used in environmental modeling when the underlying distribution is taken to be multivariate normal, and illustrated how truncated multivariate normal distributions may be employed within the environmental sciences through an example of the economics of climate change control. Karlsson M. et al. (2009) derived an estimator from a moment condition; the estimators studied are for semi-parametric linear regression models with left truncated and right censored dependent variables.
The truncated mean and the truncated variance of the multivariate normal distribution under arbitrary rectangular double truncation were derived by Manjunath B.G. and Stefan Wilhelm (2012). For the most comprehensive account of the theory and applications of truncated distributions, one can refer to the books by N. Balakrishnan and A. Clifford Cohen (1990), A. Clifford Cohen (1991), and Samuel Kotz et al. (1993).
In some instances, even though the shape of the sample frequency curve is symmetric and bell shaped, the normal approximation may fit the distribution badly. This may be due to the peakedness of the data, which might not be mesokurtic. To resolve this problem, the new symmetric distribution was derived by Srinivasa Rao, et al. (1997), and its distributional properties were studied by A. Asrat and Srinivasa Rao (2013), who utilized it for a linear model with new symmetric distributed errors, the range of the variate being infinite, with values between $-\infty$ and $+\infty$.
An instance of double truncation is given by soundwave frequencies. The frequency of a soundwave can be any non-negative number; however, we cannot hear all sounds. Only those sounds generated from soundwaves with frequencies greater than 20 Hertz and less than 20,000 Hertz are audible to human ears. Here, the truncation point on the left is 20 Hertz and that on the right is 20,000 Hertz.
However, for a constrained response variable the use of an infinite interval may fit the model badly. Hence, in this paper we study the multiple regression model with doubly truncated new symmetric distributed errors. Very little work has been reported in the literature regarding multiple regression models with errors following a truncated distribution.
The linear regression model is of the form:

$$ y_i = x_i'\beta + u_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_k x_{ik} + u_i, \quad a \le y_i \le b, \quad i = 1, 2, \ldots, n \quad (1) $$

where $y_i$ is an observed response variable of the regression, $x_{i1}, \ldots, x_{ik}$ are observed independent variables, $\beta_0, \beta_1, \ldots, \beta_k$ are unknown regression coefficients to be estimated, and the $u_i$ are independently and identically distributed error terms. It is assumed that the error term follows a two parameter doubly truncated new symmetric distribution.
The rest of the paper is organized as follows. In Section 2, the distributional properties of a two-parameter doubly truncated new symmetric distribution are derived. The maximum likelihood estimation of the model parameters is studied in Section 3. In Section 4, simulation studies are performed and the results are discussed. Least squares estimation of the model parameters is studied in Section 5. In Section 6, a comparison of maximum likelihood estimators and OLS estimators is given. In Section 7, a comparison of the suggested model with New Symmetric distributed errors and with Gaussian model errors is presented. In Section 8, the conclusions are given.
II. PROPERTIES OF DOUBLY TRUNCATED NEW SYMMETRIC DISTRIBUTION
The doubly truncated new symmetric (DTNS) distribution is defined by equation (2), which specifies its probability density function. Some of its most important properties are given below.
A random variable Y follows a DTNS distribution if its probability density function is

$$ f(y) = \begin{cases} 0, & y < a \\[4pt] \dfrac{\left[2 + \left(\frac{y-\mu}{\sigma}\right)^2\right] e^{-\frac{1}{2}\left(\frac{y-\mu}{\sigma}\right)^2}}{3\sigma\sqrt{2\pi}\,[F(b) - F(a)]}, & a \le y \le b \\[8pt] 0, & y > b \end{cases} \quad (2) $$

where, writing $a$ and $b$ for the standardized truncation points,

$$ F(b) = \int_{-\infty}^{b} \frac{(2+y^2)\,e^{-y^2/2}}{3\sqrt{2\pi}}\,dy = \frac{1}{2} + \frac{1}{2}\,\mathrm{erf}\!\left(\frac{b}{\sqrt{2}}\right) - \frac{b\,e^{-b^2/2}}{3\sqrt{2\pi}} $$

and

$$ F(a) = \int_{-\infty}^{a} \frac{(2+y^2)\,e^{-y^2/2}}{3\sqrt{2\pi}}\,dy = \frac{1}{2} + \frac{1}{2}\,\mathrm{erf}\!\left(\frac{a}{\sqrt{2}}\right) - \frac{a\,e^{-a^2/2}}{3\sqrt{2\pi}} $$

The parameters $\mu$ (with $a \le \mu \le b$) and $\sigma > 0$ are the location and scale parameters, respectively.
Figure 1 shows the frequency curves of the variate under study.
Figure 1. Frequency curves of the DTNS density for different values of the parameters, with $(\mu, \sigma) = (2, 0.75)$, $(2, 1)$ and $(2, 2)$ shown in green, blue and red, respectively, for $a = -4$ and $b = 6$.
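The density in (2) is straightforward to evaluate numerically. The following is a minimal Python sketch (our own illustration, not part of the paper; all function names are ours) that implements the DTNS density using the closed form of F given above, and checks that it integrates to one over the truncation range, here with the illustrative values μ = 2, σ = 1, a = −1, b = 5:

```python
import numpy as np
from math import erf, exp, sqrt, pi

SQRT2PI = sqrt(2 * pi)

def nsd_cdf(z):
    # cdf of the untruncated new symmetric distribution:
    # F(z) = 1/2 + (1/2) erf(z/sqrt(2)) - z exp(-z^2/2) / (3 sqrt(2 pi))
    return 0.5 + 0.5 * erf(z / sqrt(2)) - z * exp(-z * z / 2) / (3 * SQRT2PI)

def dtnsd_pdf(y, mu, sigma, a, b):
    # doubly truncated new symmetric density, eq. (2); zero outside [a, b]
    if y < a or y > b:
        return 0.0
    z = (y - mu) / sigma
    za, zb = (a - mu) / sigma, (b - mu) / sigma
    norm = 3 * sigma * SQRT2PI * (nsd_cdf(zb) - nsd_cdf(za))
    return (2 + z * z) * exp(-z * z / 2) / norm

# sanity check: the density integrates to 1 over the truncation range
ys = np.linspace(-1.0, 5.0, 20001)
vals = np.array([dtnsd_pdf(y, 2.0, 1.0, -1.0, 5.0) for y in ys])
total = float(np.sum((vals[:-1] + vals[1:]) / 2) * (ys[1] - ys[0]))
```

The trapezoidal sum `total` should be 1 up to discretization error, which verifies that the normalizing constant $3\sigma\sqrt{2\pi}\,[F(b)-F(a)]$ in (2) is consistent with the expressions for F(a) and F(b).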
The distribution function of the random variable Y, written for the standardized variate ($\mu = 0$, $\sigma = 1$), is:

$$ F_Y(y) = \frac{1}{2[F(b)-F(a)]}\left[\mathrm{erf}\!\left(\frac{y}{\sqrt{2}}\right) - \mathrm{erf}\!\left(\frac{a}{\sqrt{2}}\right)\right] - \frac{1}{3\sqrt{2\pi}\,[F(b)-F(a)]}\left[y\,e^{-y^2/2} - a\,e^{-a^2/2}\right], \quad a \le y \le b \quad (3) $$

The mean of the variable is:

$$ E(y) = \mu + \sigma\Delta $$

where, with $\phi(\cdot)$ denoting the standard normal density,

$$ \Delta = \frac{\left(\frac{4}{3} + \frac{a^2}{3}\right)\phi(a) - \left(\frac{4}{3} + \frac{b^2}{3}\right)\phi(b)}{F(b) - F(a)} \quad (4) $$

If $\Delta > 0$, i.e. when the truncation removes more mass on the left than on the right, the mean of the truncated variable is greater than the original mean $\mu$; $\mu + \sigma\Delta$ is the mean of the doubly truncated variable.
The characteristic function $\varphi_Y(t)$ of a random variable Y is:

$$ \varphi_Y(t) = E\!\left(e^{itY}\right) = \int_a^b e^{ity}\,\frac{\left[2+\left(\frac{y-\mu}{\sigma}\right)^2\right] e^{-\frac{1}{2}\left(\frac{y-\mu}{\sigma}\right)^2}}{3\sigma\sqrt{2\pi}\,[F(b)-F(a)]}\,dy $$

$$ = \frac{e^{i\mu t - \frac{1}{2}\sigma^2 t^2}}{F(b)-F(a)}\left\{\left(1 - \frac{\sigma^2 t^2}{3}\right)\left[\Phi(b - i\sigma t) - \Phi(a - i\sigma t)\right] + \frac{1}{3}\left[(a + i\sigma t)\,\phi(a - i\sigma t) - (b + i\sigma t)\,\phi(b - i\sigma t)\right]\right\} \quad (5) $$

where $\phi(\cdot)$ and $\Phi(\cdot)$ denote the standard normal density and distribution function, and $a$, $b$ are the standardized truncation points.

The moment generating function is:

$$ M_Y(t) = \frac{e^{\mu t + \frac{1}{2}\sigma^2 t^2}}{F(b)-F(a)}\left\{\left(1 + \frac{\sigma^2 t^2}{3}\right)\left[\Phi(b - \sigma t) - \Phi(a - \sigma t)\right] + \frac{1}{3}\left[(a + \sigma t)\,\phi(a - \sigma t) - (b + \sigma t)\,\phi(b - \sigma t)\right]\right\} \quad (6) $$
The cumulant generating function is:

$$ g(t; \mu, \sigma^2) = \ln M_Y(t; \mu, \sigma^2) = \mu t + \frac{1}{2}\sigma^2 t^2 + \ln A(t) - \ln[F(b) - F(a)] \quad (7) $$

where $A(t)$ denotes the bracketed factor in (6). The first two cumulants are

$$ \kappa_1 = \mu + \sigma\,\frac{\left(\frac{4}{3}+\frac{a^2}{3}\right)\phi(a) - \left(\frac{4}{3}+\frac{b^2}{3}\right)\phi(b)}{F(b)-F(a)} $$

and

$$ \kappa_2 = \sigma^2\left\{\frac{5}{3} + \frac{a\left(\frac{10}{9}+\frac{a^2}{3}\right)\phi(a) - b\left(\frac{10}{9}+\frac{b^2}{3}\right)\phi(b)}{F(b)-F(a)} - \left[\frac{\left(\frac{4}{3}+\frac{a^2}{3}\right)\phi(a) - \left(\frac{4}{3}+\frac{b^2}{3}\right)\phi(b)}{F(b)-F(a)}\right]^2\right\} \quad (8) $$

Higher-order cumulants can be obtained from successive derivatives of (7).
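As a numerical cross-check of (4) and (8), the sketch below (our own illustration, not the authors' code) evaluates the first two cumulants from the closed forms and compares them with moments obtained by direct numerical integration of the density, for the illustrative values μ = 2, σ = 1, a = −1, b = 5:

```python
import numpy as np
from math import erf, exp, sqrt, pi

def npdf(z):
    # standard normal density phi(z)
    return exp(-z * z / 2) / sqrt(2 * pi)

def nsd_cdf(z):
    # cdf of the untruncated new symmetric distribution
    return 0.5 + 0.5 * erf(z / sqrt(2)) - z * npdf(z) / 3

def dtnsd_cumulants(mu, sigma, a, b):
    # first two cumulants, eqs. (4) and (8), with standardized bounds
    za, zb = (a - mu) / sigma, (b - mu) / sigma
    dF = nsd_cdf(zb) - nsd_cdf(za)
    delta = ((4/3 + za**2/3) * npdf(za) - (4/3 + zb**2/3) * npdf(zb)) / dF
    ez2 = 5/3 + (za * (10/9 + za**2/3) * npdf(za)
                 - zb * (10/9 + zb**2/3) * npdf(zb)) / dF
    return mu + sigma * delta, sigma**2 * (ez2 - delta**2)

mu, sigma, a, b = 2.0, 1.0, -1.0, 5.0
k1, k2 = dtnsd_cumulants(mu, sigma, a, b)

# cross-check against direct numerical integration of the standardized density
zs = np.linspace((a - mu) / sigma, (b - mu) / sigma, 40001)
dens = (2 + zs**2) * np.exp(-zs**2 / 2) / (3 * sqrt(2 * pi))
dens /= nsd_cdf((b - mu) / sigma) - nsd_cdf((a - mu) / sigma)
dz = zs[1] - zs[0]
trap = lambda f: float(np.sum((f[:-1] + f[1:]) / 2) * dz)
m1 = mu + sigma * trap(zs * dens)
m2 = sigma**2 * (trap(zs**2 * dens) - trap(zs * dens)**2)
```

With these symmetric bounds (3σ on either side of μ) the mean reduces to μ = 2 exactly, and the truncated variance κ₂ falls below the untruncated value 5/3, as expected.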
III. MAXIMUM LIKELIHOOD ESTIMATION OF THE MODEL PARAMETERS
In this section, we consider the regression model $Y = X\beta + u$ with $u \sim DTNS(0, \sigma^2 I)$. The unknown parameters of the model are the coefficient vector $\beta$ and the error variance $\sigma^2$. The MLEs of $\beta$ and $\sigma^2$ are computed as the parameter values that maximize the likelihood function:

$$ L(y; \beta, \sigma^2, a, b) = \prod_{i=1}^{n} \frac{\left[2+\left(\frac{y_i - x_i'\beta}{\sigma}\right)^2\right] e^{-\frac{1}{2}\left(\frac{y_i - x_i'\beta}{\sigma}\right)^2}}{3\sigma\sqrt{2\pi}\,[F(b)-F(a)]}\, I_{[a,b]}(y_i) $$

$$ = \left(3\sigma\sqrt{2\pi}\right)^{-n} [F(b)-F(a)]^{-n} \prod_{i=1}^{n}\left[2+\left(\frac{y_i - x_i'\beta}{\sigma}\right)^2\right] e^{-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i - x_i'\beta)^2} \quad (9) $$

where, with $b_i = (b - x_i'\beta)/\sigma$ and $a_i = (a - x_i'\beta)/\sigma$,

$$ F(b) = \frac{1}{2} + \frac{1}{2}\,\mathrm{erf}\!\left(\frac{b_i}{\sqrt{2}}\right) - \frac{b_i\,e^{-b_i^2/2}}{3\sqrt{2\pi}} \quad\text{and}\quad F(a) = \frac{1}{2} + \frac{1}{2}\,\mathrm{erf}\!\left(\frac{a_i}{\sqrt{2}}\right) - \frac{a_i\,e^{-a_i^2/2}}{3\sqrt{2\pi}} $$

The log-likelihood function is

$$ l = \ln L(y; \beta, \sigma^2, a, b) = -n\ln\left(3\sigma\sqrt{2\pi}\right) - \sum_{i=1}^{n}\ln[F(b)-F(a)] + \sum_{i=1}^{n}\ln\left[2+\left(\frac{y_i-x_i'\beta}{\sigma}\right)^2\right] - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i-x_i'\beta)^2 \quad (10) $$

The maximum likelihood estimators $\hat{\theta}_{MLE} = (\hat{\beta}, \hat{\sigma}^2)$ are those which maximize the log-likelihood function. Taking the partial derivatives of the log-likelihood with respect to the $(k+1) \times 1$ vector $\beta$ and the scalar $\sigma^2$, and setting the results equal to zero, gives (11). The maximum likelihood estimators are the solutions of the equations

$$ \frac{\partial l}{\partial \beta} = 0 \quad\text{and}\quad \frac{\partial l}{\partial \sigma^2} = 0 \quad (11) $$

Since there are no closed form solutions to the likelihood equations, numerical methods such as Fisher scoring or the Newton-Raphson iterative method can be used to obtain the MLEs. The standard procedure for implementing this solution is the Newton-Raphson iterative method:

$$ \theta^{(n+1)} = \theta^{(n)} - \left[H^{(n)}\right]^{-1} S^{(n)}, \qquad \theta = (\beta', \sigma^2)' \quad (12) $$

where $S^{(n)}$ is the score vector and $H^{(n)}$ is the Hessian matrix of the log-likelihood at $\theta^{(n)}$. Here we begin with some starting value $\theta^{(0)}$, say, and improve it by finding a better approximation $\theta^{(1)}$ to the required root. This procedure is iterated to go from a current approximation $\theta^{(n)}$ to a better approximation $\theta^{(n+1)}$.
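To make the estimation step concrete, the sketch below (an illustration of ours, not the authors' code) evaluates the log-likelihood (10) for a simulated data set; in practice this function would be handed to a Newton-Raphson or Fisher-scoring routine as in (12). The truncation bounds, the Gaussian stand-in used for the error draws, and all names are our own assumptions:

```python
import numpy as np
from math import erf, sqrt, pi

SQRT2PI = sqrt(2 * pi)

def nsd_cdf(z):
    # cdf of the untruncated new symmetric distribution (vectorized)
    z = np.asarray(z, dtype=float)
    erf_v = np.vectorize(erf)
    return 0.5 + 0.5 * erf_v(z / sqrt(2)) - z * np.exp(-z**2 / 2) / (3 * SQRT2PI)

def loglik(beta, sigma, y, X, a, b):
    # log-likelihood (10) of the DTNSD regression model
    if np.any((y < a) | (y > b)):
        return -np.inf                       # outside the truncation range
    r = (y - X @ beta) / sigma               # standardized residuals
    za = (a - X @ beta) / sigma
    zb = (b - X @ beta) / sigma
    return (-len(y) * np.log(3 * sigma * SQRT2PI)
            - np.sum(np.log(nsd_cdf(zb) - nsd_cdf(za)))
            + np.sum(np.log(2 + r**2))
            - 0.5 * np.sum(r**2))

# simulated data; Gaussian errors as a rough stand-in for DTNSD draws,
# with bounds wide enough that all responses fall inside [a, b]
rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(1.0, 1.0, n)])
beta_true = np.array([2.0, 3.0])
y = X @ beta_true + rng.normal(0.0, 1.0, n)
a, b = y.min() - 1.0, y.max() + 1.0

ll_true = loglik(beta_true, 1.0, y, X, a, b)
ll_off = loglik(beta_true + np.array([0.0, 1.0]), 1.0, y, X, a, b)
```

As expected, the log-likelihood evaluated at the generating coefficients exceeds the value at a deliberately displaced coefficient vector, which is the behaviour any maximizer in (11)-(12) exploits.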
IV. SIMULATION AND RESULTS
The proposed model was evaluated through Monte Carlo experiments in which the data are generated from model (1) using Wolfram Mathematica 10.4. To facilitate exposition of the method of estimation, several data sets with two explanatory variables and one dependent variable are simulated from a model with prespecified parameters for the sample sizes n = 100, 1000, 3000, 5000, and 10000. The dependent variable Y is simulated using the doubly truncated new symmetric distribution with mean 2 and variance 1 using the following procedure:
Step 1: Generate the uniform random numbers $d_i \sim U(0, 1)$, $i = 1, 2, \ldots, n$.
Step 2: Given $\mu = 2$, $\sigma = 1$, $a = -1$, and $b = 5$, solve

$$ \frac{1}{2[F(b)-F(a)]}\left[\mathrm{erf}\!\left(\frac{y_i}{\sqrt{2}}\right) - \mathrm{erf}\!\left(\frac{a}{\sqrt{2}}\right)\right] - \frac{1}{3\sqrt{2\pi}\,[F(b)-F(a)]}\left[y_i\,e^{-y_i^2/2} - a\,e^{-a^2/2}\right] = d_i \quad (13) $$

The solution $y_i$, $i = 1, 2, \ldots, n$, will have the standard DTNS distribution.
Step 3: We then generate the two predictor variables $X_1$ and $X_2$ using the simulation protocol below and use them as explanatory variables for the regression model:

X1 = RandomReal[NormalDistribution[1, 1], n]
X2 = RandomReal[LogNormalDistribution[0, 1], n]   (14)
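Steps 1 and 2 above can be sketched in Python as follows (our own illustration; the paper used Mathematica). Each draw $y_i$ is obtained by solving (13) with simple bisection on the monotone truncated cdf, using the parameter values of Step 2:

```python
import numpy as np
from math import erf, exp, sqrt, pi

def nsd_cdf(z):
    # cdf of the untruncated new symmetric distribution
    return 0.5 + 0.5 * erf(z / sqrt(2)) - z * exp(-z * z / 2) / (3 * sqrt(2 * pi))

def dtnsd_sample(n, mu, sigma, a, b, rng):
    # Step 1: uniform draws d_i; Step 2: solve F_trunc(y_i) = d_i by bisection
    za, zb = (a - mu) / sigma, (b - mu) / sigma
    Fa, Fb = nsd_cdf(za), nsd_cdf(zb)
    out = np.empty(n)
    for i, d in enumerate(rng.uniform(0.0, 1.0, n)):
        target = Fa + d * (Fb - Fa)      # untruncated-cdf value to hit
        lo, hi = za, zb
        for _ in range(60):              # bisection on the monotone cdf
            mid = (lo + hi) / 2
            if nsd_cdf(mid) < target:
                lo = mid
            else:
                hi = mid
        out[i] = mu + sigma * (lo + hi) / 2
    return out

# Step 2 parameter values: mu = 2, sigma = 1, a = -1, b = 5
rng = np.random.default_rng(0)
ys = dtnsd_sample(5000, 2.0, 1.0, -1.0, 5.0, rng)
```

Step 3 then corresponds to, e.g., `rng.normal(1, 1, n)` and `rng.lognormal(0, 1, n)` in place of the Mathematica calls in (14).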
DTNS error regressions were applied to the simulated datasets and the estimated parameters were compared to the true parameters. This process was repeated for sample sizes of n = 100, 1000, 3000, 5000, and 10000. In the first Monte Carlo experiment we generate the datasets $X_1$ and $X_2$ from (14) and Y from the model defined in (1) and (14). Without loss of generality, we took the values of the model parameters to be

$$ \beta_0 = 2, \quad \beta_1 = 3, \quad \beta_2 = 4 \quad\text{and}\quad \sigma^2 = 1 \quad (15) $$

The error terms are generated from a doubly truncated new symmetric distribution, that is, $u_i \sim$ DTNS.
Table 1. Summary of simulations for the maximum likelihood (ML) estimation of the regression model

Sample Size  Parameter  Estimate  Std. Error  Wald 95% CI (Lower, Upper)  Chi-Square  Pr > ChiSq
100          β0         1.6330    0.1612      (1.3170, 1.9491)            102.56      <.0001
             β1         3.0385    0.0908      (2.8606, 3.2164)            1120.89     <.0001
             β2         4.1333    0.0460      (4.0432, 4.2235)            8079.28     <.0001
             σ          0.8938    0.0632      (0.7781, 1.0267)
1000         β0         2.0082    0.0477      (1.9147, 2.1017)            1772.47     <.0001
             β1         2.9967    0.0313      (2.9354, 3.0579)            9187.75     <.0001
             β2         3.9997    0.0096      (3.9808, 4.0185)            172753      <.0001
             σ          0.9776    0.0219      (0.9357, 1.0214)
3000         β0         1.9931    0.0299      (1.9346, 2.0516)            4458.15     <.0001
             β1         3.0231    0.0179      (2.9880, 3.0583)            28448.2     <.0001
             β2         3.9942    0.0098      (3.9750, 4.0134)            166318      <.0001
             σ          0.9922    0.0128      (0.9674, 1.0176)
5000         β0         1.9784    0.0228      (1.9337, 2.0232)            7506.31     <.0001
             β1         3.0094    0.0140      (2.9820, 3.0367)            46519.3     <.0001
             β2         4.0069    0.0069      (3.9934, 4.0205)            335462      <.0001
             σ          0.9937    0.0099      (0.9744, 1.0134)
10000        β0         1.9792    0.0164      (1.9471, 2.0112)            14650.0     <.0001
             β1         3.0138    0.0101      (2.9940, 3.0335)            89406.3     <.0001
             β2         4.0022    0.0048      (3.9928, 4.0116)            692365      <.0001
             σ          1.0080    0.0071      (0.9941, 1.0220)
The results in Table 1 suggest that as the sample size increases, the parameter estimates become more
precise. As the sample size grows from 100 to 1,000 and then to 10,000 observations, all estimators move
closer to the true parameter values and the dispersion of the estimator distributions decreases notably. The
fitted linear regression model with MDTNS error terms, based on $n = 10{,}000$ simulated observations, is

$$ \hat{Y} = 1.9792 + 3.0138\, X_1 + 4.0022\, X_2, \qquad (16) $$

with estimated standard errors

$$ s.e.(\hat\beta_0) = 0.0164, \quad s.e.(\hat\beta_1) = 0.0101, \quad s.e.(\hat\beta_2) = 0.0048 \quad \text{and} \quad s.e.(\hat\sigma) = 0.0071. \qquad (17) $$
V. LEAST SQUARES ESTIMATION OF THE MODEL PARAMETERS
The method most commonly used to estimate the regression coefficients in a standard linear regression model is
ordinary least squares (OLS). The OLS estimator $\hat\beta_{OLS}$ minimizes the sum of squared residuals
(18) and is given by (19), where $\beta = (\beta_0, \beta_1, \beta_2)'$ is a $(k+1) \times 1$ parameter vector. Equation (19) follows by
i) differentiating equation (18) with respect to $\beta$,
ii) setting the resulting matrix equation equal to zero, and
iii) replacing $\beta$ by $\hat\beta_{OLS}$ and rearranging for $\hat\beta_{OLS}$.
Suppose $y_1, y_2, \ldots, y_n$ are observations of a random variable $y$. The estimates of $\beta_0, \beta_1, \ldots, \beta_k$ are the values
that minimize

$$ SS(\beta) = \sum_{i=1}^{n} \left(y_i - x_i'\beta\right)^2 = (Y - X\beta)'(Y - X\beta). \qquad (18) $$

The resulting OLS estimator of $\beta$ is

$$ \hat\beta = (X'X)^{-1} X'Y, \qquad (19) $$

provided that the inverse $(X'X)^{-1}$ exists.
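The closed form in (19) can be sketched in plain Python by solving the normal equations $X'X\beta = X'Y$ with Gaussian elimination; the function name `ols` and its matrix layout (rows of `X` are observations, first column a constant 1 for the intercept) are our own conventions.

```python
def ols(X, Y):
    # beta_hat = (X'X)^{-1} X'Y, computed by Gaussian elimination
    # on the normal equations (X'X) beta = X'Y
    n, k = len(X), len(X[0])
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(k)] for r in range(k)]
    b = [sum(X[i][r] * Y[i] for i in range(n)) for r in range(k)]
    # forward elimination with partial pivoting
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, k):
            m = A[r][c] / A[c][c]
            for j in range(c, k):
                A[r][j] -= m * A[c][j]
            b[r] -= m * b[c]
    # back substitution
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][j] * beta[j] for j in range(r + 1, k))) / A[r][r]
    return beta
```

For example, fitting the exactly linear data $y = 1 + 2x$ at $x = 0, 1, 2$ recovers the coefficients $(1, 2)$.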
The predicted values of the dependent variable are given by $\hat{Y} = X\hat\beta$ and the residuals are calculated as

$$ e = Y - \hat{Y}. \qquad (20) $$

Properties of the Estimates
Some properties of the ordinary least squares (OLS) estimates are, assuming $E(u) = 0$ and $Var(u) = \sigma^2 I_n$,

$$ Var(\hat\beta) = \sigma^2 (X'X)^{-1}. \qquad (21) $$
Table 2 presents the properties of the OLS estimates obtained from the data simulated in Section 4.
Table 2. OLS estimation output for the simulated data

Equation | DF Model | DF Error | SSE | MSE | Root MSE | R-Square | Adjusted R-Square
y | 3 | 9997 | 10149.6 | 1.0153 | 1.0076 | 0.9873 | 0.9873

Parameter | Estimate | Approx. Standard Error | t Value | Approx. Pr > |t|
β0 | 1.978464 | 0.0163 | 121.03 | <.0001
β1 | 3.013995 | 0.0101 | 298.97 | <.0001
β2 | 4.002315 | 0.00481 | 832.41 | <.0001
Table 2 shows that the ordinary least squares (OLS) estimates differ from the maximum
likelihood (ML) estimates, and that the ML estimators are closer to the true parameter values than
the OLS estimators.
VI. COMPARISON OF MAXIMUM LIKELIHOOD AND OLS ESTIMATORS
To fit the multiple linear regression model with two-parameter doubly truncated new symmetric error terms,
the MLEs and LSEs were compared. For each estimation method the bias and mean square error (MSE) were
computed for $n = 10{,}000$, where

$$ MSE(\hat\theta) = Var(\hat\theta) + \left[bias(\hat\theta)\right]^2. $$

The computational results are presented in Table 3.
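The two comparison criteria above can be computed directly from a collection of Monte Carlo estimates; the helper name `bias_and_mse` is our own.

```python
def bias_and_mse(estimates, true_value):
    # bias = mean(estimate) - true value
    # MSE  = mean((estimate - true value)^2) = Var(estimate) + bias^2
    n = len(estimates)
    mean_est = sum(estimates) / n
    bias = mean_est - true_value
    mse = sum((e - true_value) ** 2 for e in estimates) / n
    return bias, mse
```

For instance, estimates of 1.0 and 3.0 for a true value of 2.0 have zero bias but an MSE of 1.0, which is entirely variance.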
Table 3. Comparison of the LSEs and MLEs of the doubly truncated new symmetric regression model

Parameter | Criterion | OLS | ML
β̂0 | Bias | -0.021536   | -0.0208
β̂0 | MSE  | 0.000729489 | 0.0007016
β̂1 | Bias | 0.013995    | 0.0138
β̂1 | MSE  | 0.00029787  | 0.00029245
β̂2 | Bias | 0.002315    | 0.0022
β̂2 | MSE  | 2.84953E-05 | 2.788E-05
From Table 3 it is observed that the comparison criteria confirm that deviations from normality cause the
LSEs to be poor estimators. The results in Table 3 also show that the MLEs have both smaller bias and smaller
mean square error (MSE) than the least squares (LS) estimators. These results confirm that the ML estimation
method shows better overall performance than OLS.
VII. COMPARISON OF THE MDTNS-LM WITH THE N-LM
In this paper, the performance of the linear regression model with doubly truncated new symmetric distributed error
terms was compared with that of the model with normal error terms using simulated data. To choose the best model,
the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), together with the root
mean square error (RMSE) model diagnostic, were computed. The output of the simulation studies for various sample sizes is
presented in Table 4.
Table 4. Summary of information criteria and model diagnostics for the normal and doubly truncated new
symmetric error models

Model | Sample Size (n) | AIC | BIC | RMSE
MDTNSD | 100   | -16.4545  | -14.2708  | 0.90752
MDTNSD | 1000  | -39.24445 | -37.2264  | 0.97910
MDTNSD | 3000  | -41.0639  | -39.0579  | 0.99268
MDTNSD | 5000  | -57.0270  | -55.0234  | 0.99402
MDTNSD | 10000 | -154.5126 | -156.5144 | 0.99576
Normal | 100   | 2.5315    | 4.7151    | 0.99789
Normal | 1000  | -6.0051   | -3.9871   | 0.99551
Normal | 3000  | -3.7005   | -1.6945   | 0.99888
Normal | 5000  | -26.9597  | -24.9561  | 0.99701
Normal | 10000 | -75.2408  | -73.2390  | 0.99610
The model with the smallest AIC or BIC among all competing models is deemed the best model; on this
criterion the MDTNS distribution provides the better fit to the data. That is, both the information criteria
(AIC and BIC) and the model diagnostic (RMSE) indicate that the linear model with doubly truncated
new symmetrically distributed error terms consistently performed better across all the sample sizes in the
simulation. This can also be seen consistently in Figure 2 through Figure 4.
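The two information criteria used for this comparison follow the standard definitions $AIC = 2k - 2\ln L$ and $BIC = k\ln n - 2\ln L$; a minimal helper (our own naming, assuming a maximized log-likelihood is already available) is:

```python
import math


def aic_bic(loglik, k, n):
    # AIC = 2k - 2 ln L ; BIC = k ln(n) - 2 ln L
    # loglik: maximized log-likelihood; k: number of fitted parameters; n: sample size
    aic = 2 * k - 2 * loglik
    bic = k * math.log(n) - 2 * loglik
    return aic, bic
```

Because both criteria penalize the same $-2\ln L$ term, the model with the smaller value on either criterion is preferred, with BIC penalizing extra parameters more heavily for large $n$.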
Figure 2. Comparison of Multiple Doubly Truncated New Symmetric (MDTNS) linear model versus Normal
Linear Model using AIC.
Figure 3. Comparison of Multiple Doubly Truncated New Symmetric (MDTNS) linear model versus Normal
Linear Model using BIC.
Figure 4. Comparison of Multiple Doubly Truncated New Symmetric (MDTNS) linear model versus Normal
Linear Model using RMSE.
Clearly, from Figures 2 through 4, we note that the MDTNS model outperforms the normal model.
VIII. SUMMARY AND CONCLUSIONS
In this paper, we introduced the multiple regression model with doubly truncated new symmetric distributed
errors. The doubly truncated new symmetric distribution serves as an alternative to the new symmetric
distribution. The maximum likelihood estimators of the model parameters were derived, and their properties
were studied via simulation. OLS estimation was carried out in parallel and the results were compared. The
simulation results reveal that the ML estimators are more efficient than the OLS estimators. A comparative
study of the developed doubly truncated new symmetric linear regression model with the Gaussian model
showed that the proposed model gives a good fit to the simulated data sets. This regression model is useful for
analyzing data sets arising in reliability, lifetime data analysis, engineering, survival analysis, and a wide range
of other practical problems. This work can be extended to non-linear regression with doubly truncated new
symmetric distributed errors, which will be taken up elsewhere.
REFERENCES
[1] Asrat Atsedwoyen and K. Srinivasa Rao, Estimation of a linear model with two parameter symmetric platykurtic distributed errors,
Journal of Uncertainty Analysis and Applications, 1, 2013, 1-13.
[2] Cohen, A.C., Jr., On estimating the mean and standard deviation of truncated normal distributions, Journal of the American
Statistical Association, 44(248), 1949, 518-525.
[3] Cohen, A.C., Estimating the mean and variance of normal populations from singly truncated and doubly truncated samples, The Annals
of Mathematical Statistics, 21(4), 1950, 557-569.
[4] Cohen, A.C., Jr., Tables for maximum likelihood estimates: singly truncated and singly censored samples, Technometrics, 3(4),
1961, 535-541.
[5] Cohen, A.C., Truncated and Censored Samples: Theory and Applications, Marcel Dekker, New York, 1991.
[6] Manjunath, B.G. and Stefan Wilhelm, Moments calculation for the doubly truncated multivariate normal
density, http://dx.doi.org/10.2139/ssrn.1472153, 2012.
[7] Maria Karlsson and Thomas Laitila, A semiparametric regression estimator under left truncation and right censoring, Statistics and
Probability Letters, 78(16), 2009, 2567-2571.
[8] Pearson, K. and Lee, A., On the generalized probable error in multiple normal correlation, Biometrika, 6(1), 1908, 59-68.
[9] Sharples, J.J. and Pezzey, J.C.V., Expectations of linear functions with respect to truncated multinormal distributions, with
applications for uncertainty analysis in environmental modelling, Environmental Modelling and Software, 22, 2007, 915-923.
[10] Srinivasa Rao, K., Vijay Kumar, C.V.S.R., and Lakshmi Narayana, J., On a new symmetric distribution, Journal of the Indian Society of
Agricultural Statistics, 50(1), 1997, 95-102.
[11] Srivastava, M.S., Methods of Multivariate Statistics, Wiley, New York, 2002.
[12] Tallis, G.M., The moment generating function of the truncated multi-normal distribution, Journal of the Royal Statistical Society,
Series B, 23(1), 1961, 223-229.
[13] Venkatesh, A. and Mohankumar, S., A mathematical model for the secretion of vasopressin using fuzzy truncated normal
distribution, International Journal of Pure and Applied Mathematics, 104(1), 2015, 69-77.