A task often faced by practitioners is deciding on an appropriate model for fitting count datasets. This paper investigates a suitable model for fitting highly skewed count data. The COM-Poisson regression model is proposed for this purpose because of its varying normalizing constant. Several other models are investigated alongside it: the Poisson, Negative Binomial, Zero-Inflated, Zero-Inflated Poisson, and Quasi-Poisson models. A real-life dataset on visits to a doctor within a given period is used to test the behaviour of these models. Based on the findings, the COM-Poisson regression model is recommended for fitting highly skewed count datasets irrespective of the type of dispersion.
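The varying normalizing constant referred to above is the series Z(λ, ν) = Σⱼ λʲ/(j!)^ν. A minimal sketch, not the paper's code, of evaluating the COM-Poisson pmf (parameter values here are illustrative; the series is truncated and evaluated in log space to avoid overflow):

```python
import math

def com_poisson_pmf(y, lam, nu, max_terms=100):
    # log-space evaluation avoids overflow in (j!)**nu for large j
    log_terms = [j * math.log(lam) - nu * math.lgamma(j + 1)
                 for j in range(max_terms)]
    m = max(log_terms)
    log_z = m + math.log(sum(math.exp(t - m) for t in log_terms))
    return math.exp(y * math.log(lam) - nu * math.lgamma(y + 1) - log_z)

# nu = 1 recovers the ordinary Poisson; nu < 1 allows overdispersion,
# nu > 1 underdispersion.
poisson_p2 = math.exp(-3.0) * 3.0**2 / 2            # Poisson(3) pmf at y = 2
com_p2 = com_poisson_pmf(2, lam=3.0, nu=1.0)        # should agree closely
total = sum(com_poisson_pmf(y, lam=3.0, nu=0.7) for y in range(100))
```

Setting ν = 1 reduces Z to e^λ, which is why the model nests the Poisson as a special case.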
Development modeling methods of analysis and synthesis of fingerprint deforma...IJECEIAES
The current study develops methods for modeling, analyzing, and synthesizing fingerprint image deformations and applies them to automatic fingerprint identification. The introduction justifies the urgency of the problem and gives a brief description of the thematic literature. The study reviews modern biometric technologies and methods of biometric identification, surveys fingerprint identification systems, and investigates distorting factors. The influence of deformations is singled out and the causes of fingerprint deformation are analyzed, followed by a review of modern ways of accounting for and modeling deformations in automatic fingerprint identification. The scientific novelty of the work lies in the development of information technologies for the analysis and synthesis of fingerprint image deformations; its practical value lies in applying the developed methods, algorithms, and information technologies in fingerprint identification systems.
MIXTURES OF TRAINED REGRESSION CURVES MODELS FOR HANDWRITTEN ARABIC CHARACTER R...ijaia
In this paper, we demonstrate how regression curves can be used to recognize 2D non-rigid handwritten shapes. Each shape is represented by a set of non-overlapping, uniformly distributed landmarks. The underlying models use 2nd-order polynomials to model the shapes in a training set. To estimate the regression models, we extract the coefficients that describe the variation of each shape class; a least-squares method is used to estimate these models. We then train the coefficients using the Expectation-Maximization algorithm. Recognition is carried out by finding the least-error landmark displacement with respect to the model curves. Handwritten isolated Arabic characters are used to evaluate the approach.
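The core of the pipeline above (landmarks, 2nd-order polynomial shape models, least-squares coefficient estimation, least-error displacement matching) can be sketched roughly as follows. The curve, landmark count, and single-class "model" are synthetic stand-ins, and the EM training and mixture steps are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shape class: 20 landmarks sampled along a quadratic curve,
# with 50 noisy training shapes.
t = np.linspace(0.0, 1.0, 20)
true_coeffs = np.array([0.5, -1.2, 0.8])             # c0 + c1*t + c2*t^2
shapes = [np.polyval(true_coeffs[::-1], t) + rng.normal(0.0, 0.01, t.size)
          for _ in range(50)]

# Least-squares fit of a 2nd-order polynomial to each training shape.
X = np.vander(t, 3, increasing=True)                 # columns: 1, t, t^2
coeffs = np.array([np.linalg.lstsq(X, y, rcond=None)[0] for y in shapes])
mean_coeffs = coeffs.mean(axis=0)                    # simple class model

def displacement_error(y, c):
    # mean squared landmark displacement from the model curve
    return float(np.mean((X @ c - y) ** 2))

err = displacement_error(shapes[0], mean_coeffs)
```

At recognition time, a test shape would be scored against each class model and assigned to the class with the smallest displacement error.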
This document summarizes the properties of two maximum likelihood estimators of the mean of a truncated exponential distribution. The estimators are based on either a random sample from the full exponential distribution or from the truncated exponential distribution. A simulation study with 50,000 trials evaluates the moment properties of the estimators. The results show that the estimator based on the full exponential distribution has lower variance and mean squared error compared to the estimator based on the truncated distribution, making it more efficient. The relative efficiency of the truncated estimator approaches 1 as the truncation point increases.
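A much smaller version of such a simulation can be sketched as below (2,000 trials rather than 50,000, and a unit-mean exponential truncated at T = 2; both choices are illustrative, not the document's settings). The full-sample MLE of the mean is the sample mean, while the truncated-sample MLE solves the mean equation by bisection:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, T, n, trials = 1.0, 2.0, 100, 2000

def trunc_mean(th):
    # mean of an Exponential(theta) distribution truncated to (0, T)
    return th - T / np.expm1(T / th)

def mle_truncated(xbar, lo=0.05, hi=50.0):
    # trunc_mean is increasing in theta, so solve trunc_mean(theta) = xbar
    # by bisection
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if trunc_mean(mid) < xbar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

full_est, trunc_est = [], []
for _ in range(trials):
    full_est.append(rng.exponential(theta, n).mean())      # full-sample MLE
    u = rng.random(n)
    x_tr = -theta * np.log1p(u * np.expm1(-T / theta))     # inverse-CDF draw
    trunc_est.append(mle_truncated(x_tr.mean()))

var_full, var_trunc = float(np.var(full_est)), float(np.var(trunc_est))
```

Consistent with the summary above, the truncated-sample estimator shows the larger variance, and the gap shrinks as T grows.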
This document describes a framework for 2D pose estimation using active shape models and learned entropy field approximations. A dataset of manually annotated poses was created from NBA footage to train the models. Active shape models use principal component analysis to represent poses as a linear combination of modes of variation learned from the training data. To evaluate pose likelihood, image entropy is proposed as a texture similarity measure and regression is used to learn a function mapping poses to entropy fields, which can be compared to the image entropy. Current results are presented and future work to improve and speed up the approach is discussed.
Large sample property of the Bayes factor in a spline semiparametric regressi...Alexander Decker
This document summarizes a research paper about investigating the large sample property of the Bayes factor for testing the polynomial component of a spline semiparametric regression model against a fully spline alternative model. It considers a semiparametric regression model where the mean function has two parts - a parametric linear component and a nonparametric penalized spline component. By representing the model as a mixed model, it obtains the closed form of the Bayes factor and proves that the Bayes factor is consistent under certain conditions on the prior and design matrix. It establishes that the Bayes factor converges to infinity under the pure polynomial model and converges to zero almost surely under the spline semiparametric alternative model.
The Method of Repeated Application of a Quadrature Formula of Trapezoids and ...IJMER
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
International Journal of Modern Engineering Research (IJMER) covers all the fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, Assessment, and many more.
Human’s facial parts extraction to recognize facial expressionijitjournal
Real-time facial expression analysis is an important yet challenging task in human computer interaction.
This paper proposes a real-time person independent facial expression recognition system using a
geometrical feature-based approach. The face geometry is extracted using the modified active shape
model. Each part of the face geometry is effectively represented by the Census Transformation (CT) based
feature histogram. The facial expression is classified by the SVM classifier with exponential chi-square
weighted merging kernel. The proposed method was evaluated on the JAFFE database and in a real-world environment. The experimental results show that the approach yields a high recognition rate and is applicable to real-time facial expression analysis.
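The CT-based feature histogram mentioned above can be illustrated with a minimal census transform over 3x3 neighbourhoods. The bit ordering and the toy 5x5 patch are assumptions for illustration, not the paper's exact implementation:

```python
import numpy as np

def census_transform(img):
    # 8-bit census code: bit b is set when the b-th 3x3 neighbour is
    # darker than the centre pixel (the bit ordering is an arbitrary choice)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neighbour < centre).astype(np.uint8) << bit
    return out

def ct_histogram(codes, bins=256):
    # normalized CT feature histogram for one facial region
    hist = np.bincount(codes.ravel(), minlength=bins).astype(float)
    return hist / hist.sum()

img = np.arange(25, dtype=np.uint8).reshape(5, 5)   # toy 5x5 patch
codes = census_transform(img)
hist = ct_histogram(codes)
```

In the system described above, one such histogram would be computed per facial part and the concatenated histograms fed to the SVM classifier.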
EVALUATE THE STRAIN ENERGY ERROR FOR THE LASER WELD BY THE H-REFINEMENT OF TH...AM Publications
Currently, the finite element method (FEM) remains one of the most useful tools in the numerical simulation of technical problems. With this method, representing a continuum by a finite number of elements with simple approximation fields introduces discretization error into the solutions. This paper considers a laser butt weld subjected to tension, in AISI 1018 steel of 8 mm thickness. The aim of the study is to use h-refinement of the FEM to estimate the strain energy error for this laser weld. The results show the stability of the h-refinement: the relative error of the strain energy is quite small, specifically less than 5.7% for the FEM solution and no more than 3.7% for the extrapolated estimate.
INFLUENCE OF QUANTITY OF PRINCIPAL COMPONENT IN DISCRIMINATIVE FILTERINGcsandit
Discriminative filtering is a pattern recognition technique that aims to maximize the energy of the output signal when a pattern is found. Seeking to improve the filter response, principal component analysis was incorporated into the design of discriminative filters. In this work, we investigate the influence of the number of principal components on the performance of discriminative filtering applied to a facial fiducial point detection system. We show that the number of principal components directly affects the performance of the system, in terms of both true and false positive rates.
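The effect of the number of retained principal components can be illustrated with a small PCA sketch on synthetic data (the "patch features" below are simulated stand-ins, not the paper's facial data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical patch features: 200 samples in 50 dimensions whose variance
# is concentrated in five latent directions of decreasing strength.
latent = rng.normal(size=(200, 5)) * np.array([10.0, 6.0, 3.0, 1.5, 1.0])
X = latent @ rng.normal(size=(5, 50)) + rng.normal(scale=0.1, size=(200, 50))

Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
explained = s**2 / np.sum(s**2)              # per-component variance share

def n_components_for(threshold):
    # smallest number of leading principal components whose cumulative
    # variance share reaches `threshold`
    return int(np.searchsorted(np.cumsum(explained), threshold) + 1)

k90, k99 = n_components_for(0.90), n_components_for(0.99)
```

Choosing too few components discards discriminative directions, while too many admit noise directions into the filter design, which is the trade-off the work above investigates.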
A digital image forensic approach to detect whether
an image has been seam carved or not is investigated herein.
Seam carving is a content-aware image retargeting technique
which preserves the semantically important content of an image
while resizing it. The same technique, however, can be used
for malicious tampering of an image. Eighteen energy-, seam-, and
noise-related features defined by Ryu [1] are produced using
Sobel's [2] gradient filter and Rubinstein's [3] forward energy
criterion enhanced with image gradients. An extreme gradient
boosting classifier [4] is trained to make the final decision.
Experimental results show that the proposed approach improves
detection accuracy by 5 to 10% for seam-carved images
with different scaling ratios when compared with other state-of-the-art methods.
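A minimal sketch of the gradient-magnitude (e1) energy map that such seam-carving features build on, using the Sobel kernels; Ryu's eighteen features and Rubinstein's forward-energy criterion are not reproduced here:

```python
import numpy as np

SOBEL_X = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])
SOBEL_Y = SOBEL_X.T

def conv2_same(img, k):
    # 'same'-size 3x3 correlation with zero padding
    p = np.pad(img, 1)
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + h, j:j + w]
    return out

def e1_energy(img):
    # gradient-magnitude (e1) energy map: |G_x| + |G_y|
    return np.abs(conv2_same(img, SOBEL_X)) + np.abs(conv2_same(img, SOBEL_Y))

img = np.zeros((8, 8))
img[:, 4:] = 1.0                             # vertical step edge
energy = e1_energy(img)
```

Seams are removed along low-energy paths through such a map, which is why statistics of the energy distribution carry traces of carving.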
SURVEY ON POLYGONAL APPROXIMATION TECHNIQUES FOR DIGITAL PLANAR CURVESZac Darcy
This document summarizes and compares three techniques for polygonal approximation of digital planar curves:
1) Masood's technique which iteratively deletes redundant points and uses a stabilization process to optimize point locations.
2) Carmona's technique which suppresses redundant points using a breakpoint suppression algorithm and threshold.
3) Tanvir's adaptive optimization algorithm which focuses on high curvature points and applies an optimization procedure.
The techniques are evaluated on standard shapes using measures like number of points, compression ratio, error, and weighted error. Masood's technique generally had lower error while Tanvir's often achieved the highest compression.
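A simplified, open-curve sketch of iterative redundant-point deletion in the spirit of Masood's technique (the deviation measure and stopping threshold are illustrative, and the stabilization process is omitted):

```python
import numpy as np

def point_deviation(prev, pt, nxt):
    # perpendicular distance from pt to the chord prev -> nxt
    d, v = nxt - prev, pt - prev
    nrm = float(np.hypot(d[0], d[1]))
    if nrm == 0.0:
        return float(np.hypot(v[0], v[1]))
    return abs(d[0] * v[1] - d[1] * v[0]) / nrm

def approximate(points, max_dev):
    # repeatedly delete the interior point with the smallest deviation
    # until every remaining deletion would exceed max_dev
    pts = [np.asarray(p, dtype=float) for p in points]
    while len(pts) > 2:
        devs = [point_deviation(pts[i - 1], pts[i], pts[i + 1])
                for i in range(1, len(pts) - 1)]
        i_min = int(np.argmin(devs))
        if devs[i_min] > max_dev:
            break
        del pts[i_min + 1]
    return np.array(pts)

curve = [(0, 0), (1, 0.02), (2, -0.01), (3, 0), (4, 2), (5, 4)]
approx = approximate(curve, max_dev=0.1)
```

The nearly collinear points are pruned while the high-curvature corner survives, which is the behaviour the compression-ratio and error measures above quantify.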
Conditional mixture model for modeling attributed dyadic dataLoc Nguyen
Dyadic data contains co-occurrences of objects and is often modeled by a finite mixture model, which in turn is learned by the expectation maximization (EM) algorithm. Objects in traditional dyadic data are identified only by name, with the drawback that it is impossible to extract implicit valuable knowledge about the objects. In this research, I propose so-called attributed dyadic data (ADD), in which each object has an informative attribute and each co-occurrence of two objects is associated with a value. ADD is flexible and covers most structures and forms of dyadic data. A conditional mixture model (CMM), a variant of the finite mixture model, is applied to learning ADD. Moreover, a significant feature of CMM is that any co-occurrence of two objects is conditioned on some conditional variable. As a result, CMM can predict or estimate co-occurrence values via a regression model, which extends the applications of ADD and CMM.
IMPROVING THE RELIABILITY OF DETECTION OF LSB REPLACEMENT STEGANOGRAPHYIJNSA Journal
This document proposes a method to improve the reliability of detecting LSB steganography by classifying images into those that provide accurate or inaccurate results from steganalysis methods like RSM, SPM, and LSM. The classification is based on statistical properties of the images like the cardinalities of sample pairs, which are invariant to embedding. Images where these properties are equal across all samples tend to produce inaccurate results, while those with a large number of certain sample pairs tend to be more accurate. Experimental results on testing stego images validate that the proposed classification can predict result reliability without knowledge of the cover images.
This paper deals with constructing a composite probability distribution that can be used to represent the failure-time distribution of mixed devices and mixed components. The distribution, called the composite exponential Burr type XII model, consists of an exponential density up to a certain value of a threshold parameter and a four-parameter Burr type XII density for the rest of the model; this gives a better fit to data than a single model such as the exponential alone or a single Burr type XII. The p.d.f. of this distribution has five parameters; by imposing certain conditions on the function and on its derivatives at the threshold parameter, some mathematical relations between the parameters can be found, so the number of unknown parameters is reduced from five to two.
In this paper the probability density of this composite distribution is derived and its cumulative distribution function (C.D.F.) is also obtained. Finally, the two remaining parameters are estimated using the maximum likelihood method for ordered observations.
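In symbols, the construction described above takes the following generic form; since the original parameter symbols were lost from the abstract, generic names are used here (λ for the exponential parameter, θ for the threshold, η for the Burr XII parameter vector):

```latex
f(x) =
\begin{cases}
  c \, f_{1}(x;\lambda), & 0 < x \le \theta,\\[2pt]
  c \, f_{2}(x;\boldsymbol{\eta}), & x > \theta,
\end{cases}
\qquad
f_{1}(\theta;\lambda) = f_{2}(\theta;\boldsymbol{\eta}),
\qquad
f_{1}'(\theta;\lambda) = f_{2}'(\theta;\boldsymbol{\eta}),
```

where f₁ is the exponential density, f₂ the four-parameter Burr type XII density, and c the normalizing constant making the composite integrate to one; the continuity and smoothness conditions at θ supply the relations that reduce the free parameters from five to two.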
Histogram expansion a technique of histogram equalizationeSAT Journals
Abstract
In this paper I describe histogram expansion, a technique of histogram equalization. I describe three different expansion techniques: dynamic range expansion, linear contrast expansion, and symmetric range expansion. Each has its specific uses and advantages; for colored images, linear contrast expansion is used. All of these methods aid the study of histograms and help in image enhancement.
Index Terms: Histogram expansion, Dynamic range expansion, Linear contrast expansion, Symmetric range expansion
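Linear contrast expansion, named above as the method of choice for colored images, can be sketched as follows (the [0, 255] output range is the usual 8-bit assumption, not a detail taken from the paper):

```python
import numpy as np

def linear_contrast_expansion(img, out_min=0, out_max=255):
    # linearly stretch the occupied intensity range to [out_min, out_max]
    lo, hi = float(img.min()), float(img.max())
    if hi == lo:
        return np.full_like(img, out_min)
    stretched = (img.astype(float) - lo) * (out_max - out_min) / (hi - lo) + out_min
    return stretched.round().astype(np.uint8)

narrow = np.array([[100, 110], [120, 130]], dtype=np.uint8)  # narrow histogram
wide = linear_contrast_expansion(narrow)                     # spans [0, 255]
```

The histogram of the output occupies the full dynamic range, which is what makes the expanded histogram easier to study.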
This document discusses functions and how to determine if a relation represents a function. It defines relations and functions, and explains how to identify the domain and range of a relation. It also describes how to use the vertical line test to determine if a graph or equation defines a function. The document demonstrates using function notation to evaluate functions at given x-values and applies the function concept to an example about annual profits of a jeans company.
Analyzing high-frequency time series is increasingly useful with the current explosion in the availability of these data in several application areas, including but not limited to, climate, finance, health analytics, transportation, etc. This talk will give an overview of two statistical frameworks that could be useful for analyzing high-frequency financial time series leading to quantification of financial risk. These include a distribution free approach using penalized estimating functions for modeling inter-event durations and an approximate Bayesian approach for modeling counts of events in regular intervals. A few other potentially useful lines of research in this area will also be introduced.
Bayes estimators for the shape parameter of Pareto Type IAlexander Decker
This document discusses Bayesian estimators for the shape parameter of the Pareto Type I distribution under different loss functions. It begins by introducing the Pareto distribution and some classical estimators for the shape parameter, including the maximum likelihood estimator, uniformly minimum variance unbiased estimator, and minimum mean squared error estimator. It then derives the Bayesian estimators under a generalized square error loss function and quadratic loss function. Both informative priors (Exponential distribution) and non-informative priors (Jeffreys prior) are considered. The performance of the estimators is compared using Monte Carlo simulations and mean squared errors.
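A small Monte Carlo sketch of such a comparison, with the scale x_m assumed known for simplicity: the likelihood is proportional to αⁿe^(-αT) with T = Σ ln(xᵢ/x_m), so under an Exponential(λ) prior the posterior for the shape α is Gamma(n+1, T+λ), and the Bayes estimator under squared error loss is the posterior mean (n+1)/(T+λ). The parameter values below are illustrative, not the document's:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha_true, xm, n, lam, trials = 3.0, 1.0, 20, 1.0, 5000

mse_mle = mse_bayes = 0.0
for _ in range(trials):
    x = xm * rng.random(n) ** (-1.0 / alpha_true)   # Pareto(alpha, xm) draws
    t = float(np.sum(np.log(x / xm)))               # sufficient statistic
    mle = n / t                                     # classical MLE
    bayes = (n + 1) / (t + lam)                     # posterior mean, Exp(lam) prior
    mse_mle += (mle - alpha_true) ** 2
    mse_bayes += (bayes - alpha_true) ** 2

mse_mle /= trials
mse_bayes /= trials
```

The shrinkage toward the prior trades a little bias for a substantial variance reduction at this sample size.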
1) The document proposes a penalized likelihood method using a penalty function like SCAD to perform simultaneous variable selection and parameter estimation in structural equation models (SEMs).
2) The method considers a general SEM where latent variables are linearly regressed on themselves with a coefficient matrix, avoiding the need to specify outcome and explanatory latent variables. Selecting nonzero coefficients in the matrix identifies the structure of the latent variable model.
3) Under regularity conditions, the consistency and oracle properties of the proposed penalized maximum likelihood estimators are established. An expectation-conditional maximization algorithm is developed for computation.
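The SCAD penalty referred to in point 1) can be written down directly; a = 3.7 is the value conventionally suggested by Fan and Li, and the λ below is illustrative:

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    # SCAD penalty of Fan and Li (2001), evaluated elementwise:
    # linear near zero, quadratic blend, then constant beyond a*lam
    b = np.abs(np.asarray(beta, dtype=float))
    small = b <= lam
    mid = (b > lam) & (b <= a * lam)
    return np.where(small, lam * b,
           np.where(mid, (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1)),
                    lam**2 * (a + 1) / 2))

lam = 0.5
vals = scad_penalty([0.0, 0.25, 1.0, 5.0], lam)
```

Because the penalty flattens out for large coefficients, large nonzero loadings in the coefficient matrix are not shrunk, which is what yields the oracle property established in point 3).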
Geoid height determination is one of the major problems of geodesy, because the use of satellite techniques in geodesy keeps increasing. Geoid heights can be determined by different methods according to the available data. Soft computing methods such as fuzzy logic and neural networks have become so popular that they are used to solve many engineering problems. Fuzzy logic theory and later developments in uncertainty assessment have enabled us to develop more precise models for our requirements. In this study, how to construct the best fuzzy model is examined. For this purpose, three different data sets were taken and two kinds of fuzzy model (two inputs, one output; and three inputs, one output) were formed for the calculation of geoid heights in Istanbul (Turkey). The results of these fuzzy models were compared with geoid heights obtained by GPS/levelling methods, and the fuzzy approximation models were tested on the test points.
Application of Semiparametric Non-Linear Model on Panel Data with Very Small ...IOSRJM
This research work investigated the behaviour of a new semiparametric non-linear (SPNL) model on a set of panel data with a very small time point (T = 1). The SPNL model incorporates the relationship between the individual independent variables and an unobserved heterogeneity variable. Five estimation techniques, namely the Least Squares (LS), Generalized Method of Moments (GMM), Continuously Updating (CU), Empirical Likelihood (EL), and Exponential Tilting (ET) estimators, were employed for the estimation, for the purpose of modelling the metrical response variable non-linearly on a set of independent variables. The performance of these estimators on the SPNL model was examined for different parameters in the model using the Least Square Error (LSE), Mean Absolute Error (MAE), and Median Absolute Error (MedAE) criteria at the lowest time point (T = 1). The results showed that the ET estimator, which provided the least estimation errors, is relatively more efficient for the proposed model than any of the other estimators considered. It is therefore recommended that the ET estimator be employed to estimate the SPNL model for panel data with a very small time point.
Stability criterion of periodic oscillations in a (16)Alexander Decker
This document examines how outliers and excess zeros impact different count data models. The author simulates count data with Poisson distributions and adds outliers and excess zeros. Four models are compared: Poisson, negative binomial, zero-inflated Poisson, and zero-inflated negative binomial. Results show that the zero-inflated negative binomial model best fits the data across sample sizes and outlier magnitudes, with the lowest dispersion indices, AIC values, and BIC values. The zero-inflated negative binomial model is thus recommended for analyzing count data with outliers or excess zeros.
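The overdispersion that excess zeros induce, which the dispersion indices above measure, can be illustrated on simulated counts; the Poisson rate and zero-inflation probability here are illustrative choices, not the author's settings:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000

counts = rng.poisson(3.0, n)                 # equidispersed baseline
zi = counts.copy()
zi[rng.random(n) < 0.3] = 0                  # add 30% structural zeros

def dispersion_index(y):
    # variance-to-mean ratio: 1 under Poisson, > 1 means overdispersion
    return float(np.var(y) / np.mean(y))

di_poisson = dispersion_index(counts)
di_zi = dispersion_index(zi)
```

A dispersion index well above 1 is the signal that a plain Poisson fit is inadequate and a zero-inflated or negative binomial alternative should be considered.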
LOGNORMAL ORDINARY KRIGING METAMODEL IN SIMULATION OPTIMIZATIONorajjournal
This paper presents a lognormal ordinary kriging (LOK) metamodel algorithm and its application to optimizing a stochastic simulation problem. Kriging models were developed as an interpolation method in geology and have been used successfully for deterministic simulation optimization (SO) problems. In recent years, kriging metamodeling has attracted growing interest for stochastic problems, and SO researchers have begun using ordinary kriging for global optimization in stochastic systems. The goals of this study are to present the LOK metamodel algorithm and to analyze the result of its application step by step. The results show that LOK is a powerful alternative metamodel in simulation optimization when the data are highly skewed.
A hybrid bacterial foraging and modified particle swarm optimization for mode...IJECEIAES
This paper studies the model reduction procedures used to reduce large-scale dynamic models into smaller ones through differential and algebraic equations. A confirmed relevance between the two models exists, and they show the same characteristics under study. These reduction procedures are generally used to mitigate computational complexity, facilitate system analysis, and hence reduce time and cost. The paper presents a study showing the impact of combining Bacterial Foraging (BF) and Modified Particle Swarm Optimization (MPSO) for the reduced-order model (ROM). The proposed hybrid algorithm (BF-MPSO) is comprehensively compared with the BF and MPSO algorithms, and a comparison is also made with selected existing techniques.
A combined conventional and differential evolution method for model order red...Cemal Ardil
The document proposes a mixed method for model order reduction of single-input single-output systems, combining a conventional technique based on the Mihailov stability criterion with a differential evolution technique. In the conventional part, the reduced denominator polynomial is derived using the Mihailov stability criterion, while the numerator is obtained by matching continued fraction expansions. The denominator polynomial is then recalculated using differential evolution optimization to minimize the integral squared error between the original and reduced models. The method is demonstrated on a numerical example and shown to produce superior results compared with the conventional method alone.
RESIDUALS AND INFLUENCE IN NONLINEAR REGRESSION FOR REPEATED MEASUREMENT DATAorajjournal
Not all observations have equal significance in regression analysis, and diagnostic checking of observations is an important aspect of model building. In this paper, we use diagnostic methods to detect residuals and influential points in nonlinear regression for repeated measurement data. Cook's distance and the Gauss-Newton method are employed to identify outliers in nonlinear regression analysis and to estimate parameters. Most of these techniques are based on graphical representations of residuals, the hat matrix, and case deletion measures. The results show the detection of single and multiple outliers in repeated measurement data, and we use these techniques to explore the behaviour of residuals and influence in a nonlinear regression model.
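Cook's distance itself is easy to sketch for the linear case; the nonlinear, repeated-measurement machinery of the paper is not reproduced, and the data below are synthetic with one planted outlier:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 30
x = np.linspace(0.0, 1.0, n)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.2, n)
y[10] += 3.0                                 # plant a single outlier

X = np.column_stack([np.ones(n), x])         # intercept + slope design
p = X.shape[1]
XtX_inv = np.linalg.inv(X.T @ X)
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)  # leverages: diag of hat matrix
beta = XtX_inv @ X.T @ y
resid = y - X @ beta
s2 = float(resid @ resid) / (n - p)

# Cook's distance D_i = e_i^2 h_i / (p s^2 (1 - h_i)^2): the influence of
# deleting observation i on the fitted coefficients
cooks = resid**2 * h / (p * s2 * (1.0 - h) ** 2)
most_influential = int(np.argmax(cooks))
```

In the nonlinear setting described above, the design matrix is replaced by the Jacobian of the mean function at the Gauss-Newton solution, but the structure of the diagnostic is the same.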
Penalized Regressions with Different Tuning Parameter Choosing Criteria and t...CSCJournals
Recently a great deal of attention has been paid to modern regression methods such as penalized regressions which perform variable selection and coefficient estimation simultaneously, thereby providing new approaches to analyze complex data of high dimension. The choice of the tuning parameter is vital in penalized regression. In this paper, we studied the effect of different tuning parameter choosing criteria on the performances of some well-known penalization methods including ridge, lasso, and elastic net regressions. Specifically, we investigated the widely used information criteria in regression models such as Bayesian information criterion (BIC), Akaike’s information criterion (AIC), and AIC correction (AICc) in various simulation scenarios and a real data example in economic modeling. We found that predictive performance of models selected by different information criteria is heavily dependent on the properties of a data set. It is hard to find a universal best tuning parameter choosing criterion and a best penalty function for all cases. The results in this research provide reference for the choices of different criteria for tuning parameter in penalized regressions for practitioners, which also expands the nascent field of applications of penalized regressions.
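Tuning-parameter selection by information criteria can be sketched for ridge regression, where the effective degrees of freedom are tr(X(XᵀX + λI)⁻¹Xᵀ); the data and the AIC/BIC formulas below are illustrative conventions, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 80, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]             # sparse truth
y = X @ beta_true + rng.normal(0.0, 1.0, n)

def ridge_ic(lam):
    # ridge fit plus AIC/BIC using effective degrees of freedom tr(H_lambda)
    A = X.T @ X + lam * np.eye(p)
    beta = np.linalg.solve(A, X.T @ y)
    rss = float(np.sum((y - X @ beta) ** 2))
    df = float(np.trace(X @ np.linalg.solve(A, X.T)))
    aic = n * np.log(rss / n) + 2.0 * df
    bic = n * np.log(rss / n) + np.log(n) * df
    return aic, bic

lams = np.logspace(-3, 3, 25)
crits = [ridge_ic(l) for l in lams]
lam_aic = float(lams[int(np.argmin([c[0] for c in crits]))])
lam_bic = float(lams[int(np.argmin([c[1] for c in crits]))])
```

Because BIC charges log(n) per effective degree of freedom against AIC's 2, it selects at least as much shrinkage, one concrete instance of the criterion-dependence the paper reports.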
EVALUATE THE STRAIN ENERGY ERROR FOR THE LASER WELD BY THE H-REFINEMENT OF TH...AM Publications
Currently, the finite element method (FEM) is still one of the useful tools in numerical simulation for technical problems. With this method, a continuum model presented by a certain number of elements with a simple approximation field causes the presence of discretization error in solutions. This paper considers the butt weld by laser which subjected the tension for AISI 1018 steel highness 8 mm. The aim of the study is to use the h-refinement of the FEM in estimation the strain energy error for the laser weld mentioned. The results show that the stability of the h-refinement shown by the value of the relative error of the strain energy is quite small, specifically; FEM is less than 5.7% and extra is no more than 3.7%.
INFLUENCE OF QUANTITY OF PRINCIPAL COMPONENT IN DISCRIMINATIVE FILTERINGcsandit
Discriminative filtering is a pattern recognition technique that aims to maximize the energy of the output signal when a pattern is found. Seeking to improve the filter response, principal component analysis was incorporated into the design of discriminative filters. In this work, we investigate the influence of the number of principal components on the performance of discriminative filtering applied to a facial fiducial point detection system. We show that the number of principal components directly affects the performance of the system, in terms of both true and false positive rates.
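The trade-off the abstract studies (how many principal components to keep) can be sketched with a few lines of numpy on synthetic data (data, names and sizes are mine, not the paper's): reconstruction error drops as more components are retained, and levels off once the intrinsic dimensionality is covered.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: 200 samples in 8 dimensions with an underlying rank-3 structure.
Z = rng.standard_normal((200, 3))
W = rng.standard_normal((3, 8))
X = Z @ W + 0.1 * rng.standard_normal((200, 8))

def pca_reconstruction_error(X, k):
    """Project onto the top-k principal components and return the mean
    squared reconstruction error."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:k]                      # top-k principal directions
    Xr = (Xc @ P.T) @ P             # project, then reconstruct
    return float(np.mean((Xc - Xr) ** 2))

for k in (1, 2, 3, 8):
    print(k, pca_reconstruction_error(X, k))
```

In a filter-design setting, the same curve drives the choice of k: too few components lose discriminative energy, too many reintroduce noise.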
A digital image forensic approach to detect whether an image has been seam carved is investigated herein. Seam carving is a content-aware image retargeting technique which preserves the semantically important content of an image while resizing it. The same technique, however, can be used for malicious tampering of an image. Eighteen energy, seam, and noise related features defined by Ryu [1] are produced using Sobel's [2] gradient filter and Rubinstein's [3] forward energy criterion enhanced with image gradients. An extreme gradient boosting classifier [4] is trained to make the final decision. Experimental results show that the proposed approach improves the detection accuracy by 5 to 10% for seam-carved images with different scaling ratios when compared with other state-of-the-art methods.
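The gradient energy that both seam carving and the forensic features above rely on can be sketched with a small numpy implementation of the Sobel magnitude (a generic e(I) = |dI/dx| + |dI/dy| sketch, not the authors' exact feature pipeline):

```python
import numpy as np

def sobel_energy(img):
    """Energy map e(I) = |dI/dx| + |dI/dy| via 3x3 Sobel kernels
    (zero padding at the borders)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1)
    H, W = img.shape
    gx = np.zeros((H, W))
    gy = np.zeros((H, W))
    for i in range(3):              # small explicit cross-correlation
        for j in range(3):
            patch = pad[i:i+H, j:j+W]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.abs(gx) + np.abs(gy)

img = np.zeros((8, 8))
img[:, 4:] = 10.0                   # vertical step edge
e = sobel_energy(img)
print(e[4, 3], e[4, 4])             # high energy only at the edge columns
```

Seam carving removes low-energy seams through such a map; the forensic features in the paper look for the statistical traces that repeated removal leaves behind.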
SURVEY ON POLYGONAL APPROXIMATION TECHNIQUES FOR DIGITAL PLANAR CURVESZac Darcy
This document summarizes and compares three techniques for polygonal approximation of digital planar curves:
1) Masood's technique which iteratively deletes redundant points and uses a stabilization process to optimize point locations.
2) Carmona's technique which suppresses redundant points using a breakpoint suppression algorithm and threshold.
3) Tanvir's adaptive optimization algorithm which focuses on high curvature points and applies an optimization procedure.
The techniques are evaluated on standard shapes using measures like number of points, compression ratio, error, and weighted error. Masood's technique generally had lower error while Tanvir's often achieved the highest compression.
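The iterative point-deletion idea underlying the first of these techniques can be sketched in a few lines (a generic greedy deletion with a deviation threshold, my own simplification rather than Masood's full algorithm with stabilization):

```python
import numpy as np

def perp_dev(p, a, b):
    """Perpendicular distance of point p from the line through a and b."""
    a, b, p = map(np.asarray, (a, b, p))
    d = b - a
    L = np.hypot(*d)
    if L == 0:
        return float(np.hypot(*(p - a)))
    return float(abs(d[0] * (a[1] - p[1]) - (a[0] - p[0]) * d[1]) / L)

def approx_polygon(points, max_dev):
    """Iteratively delete the vertex whose removal causes the smallest
    deviation, until any further removal would exceed max_dev (open curve)."""
    pts = [np.asarray(p, float) for p in points]
    while len(pts) > 2:
        devs = [perp_dev(pts[i], pts[i - 1], pts[i + 1])
                for i in range(1, len(pts) - 1)]
        i_min = int(np.argmin(devs))
        if devs[i_min] > max_dev:
            break
        del pts[i_min + 1]
    return pts

# Noisy samples of a nearly straight stroke collapse to its two endpoints.
xs = np.linspace(0, 10, 21)
curve = [(x, 0.02 * np.sin(x)) for x in xs]
print(len(approx_polygon(curve, 0.1)))
```

The number of surviving points versus the chosen threshold is exactly the compression-ratio/error trade-off the survey measures across the three techniques.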
Conditional mixture model for modeling attributed dyadic dataLoc Nguyen
Dyadic data contains co-occurrences of objects and is often modeled by a finite mixture model, which in turn is learned by the expectation maximization (EM) algorithm. Objects in traditional dyadic data are identified by names, with the drawback that it is impossible to extract implicit valuable knowledge about the objects. In this research, I propose the so-called attributed dyadic data (ADD), in which each object has an informative attribute and each co-occurrence of two objects is associated with a value. ADD is flexible and covers most structures and forms of dyadic data. A conditional mixture model (CMM), which is a variant of the finite mixture model, is applied to learning ADD. Moreover, a significant feature of CMM is that any co-occurrence of two objects is conditioned on some variable. As a result, CMM can predict or estimate co-occurrence values based on a regression model, which extends the applications of ADD and CMM.
IMPROVING THE RELIABILITY OF DETECTION OF LSB REPLACEMENT STEGANOGRAPHYIJNSA Journal
This document proposes a method to improve the reliability of detecting LSB steganography by classifying images into those that provide accurate or inaccurate results from steganalysis methods like RSM, SPM, and LSM. The classification is based on statistical properties of the images like the cardinalities of sample pairs, which are invariant to embedding. Images where these properties are equal across all samples tend to produce inaccurate results, while those with a large number of certain sample pairs tend to be more accurate. Experimental results on testing stego images validate that the proposed classification can predict result reliability without knowledge of the cover images.
This paper deals with constructing a composite probability distribution which can be used to represent the failure time distribution of mixed devices and mixed components. The distribution, called the composite exponential Burr type XII model, consists of an exponential density up to a certain threshold parameter and a four-parameter Burr type XII density for the rest of the range; this gives a better fit to data than a single model such as the exponential alone or a single Burr type XII. The probability density function of this distribution has five parameters. According to certain conditions imposed on the function and on its derivatives at the threshold parameter, mathematical relations between the parameters can be found, so the number of unknown parameters is reduced from five to two.
In this paper the probability density of this composite distribution is derived, and its cumulative distribution function (CDF) is also obtained. Finally, the two free parameters are estimated using the maximum likelihood method for ordered observations.
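Since the parameter symbols did not survive extraction, the general shape of such a composite model can still be written down. The following is a sketch in generic notation (θ for the threshold, λ for the exponential rate, and μ, s, α, k for a four-parameter Burr type XII tail), not necessarily the authors' exact parameterization:

```latex
f(x) =
\begin{cases}
  c\,\lambda e^{-\lambda x}, & 0 < x \le \theta,\\[4pt]
  c\, g(x \mid \mu, s, \alpha, k), & x > \theta,
\end{cases}
\qquad
g(x \mid \mu, s, \alpha, k)
  = \frac{\alpha k}{s}\Big(\frac{x-\mu}{s}\Big)^{\alpha-1}
    \Big[1+\Big(\frac{x-\mu}{s}\Big)^{\alpha}\Big]^{-(k+1)},
\\[8pt]
f(\theta^{-}) = f(\theta^{+}),
\qquad
f'(\theta^{-}) = f'(\theta^{+}),
\qquad
\int_{0}^{\infty} f(x)\,dx = 1 .
```

The two matching conditions at the threshold plus the normalization constraint remove three of the five parameters, which is consistent with the reduction from five unknowns to two described above.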
Histogram expansion a technique of histogram equalizationeSAT Journals
Abstract
In this paper I have described histogram expansion, a technique of histogram equalization. I have described three different techniques of expansion, namely dynamic range expansion, linear contrast expansion and symmetric range expansion. Each of these has its specific uses and advantages. For colored images, linear contrast expansion is used. All these methods help in the easy study of histograms and aid image enhancement.
Index Terms: Histogram expansion, Dynamic range expansion, Linear contrast expansion, Symmetric range expansion
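Of the three techniques, linear contrast expansion is the simplest to state: stretch the occupied intensity range linearly onto the full output range. A minimal numpy sketch (function name and toy patch are mine):

```python
import numpy as np

def linear_contrast_expansion(img, out_min=0, out_max=255):
    """Map [img.min(), img.max()] linearly onto [out_min, out_max]."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    if hi == lo:                       # flat image: nothing to stretch
        return np.full_like(img, float(out_min))
    return (img - lo) / (hi - lo) * (out_max - out_min) + out_min

narrow = np.array([[100, 110], [120, 130]])   # low-contrast patch
print(linear_contrast_expansion(narrow))      # spread over 0..255
```

After expansion the histogram occupies the whole dynamic range, which is what makes the stretched histogram easier to study.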
This document discusses functions and how to determine if a relation represents a function. It defines relations and functions, and explains how to identify the domain and range of a relation. It also describes how to use the vertical line test to determine if a graph or equation defines a function. The document demonstrates using function notation to evaluate functions at given x-values and applies the function concept to an example about annual profits of a jeans company.
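The vertical line test described above has a direct algebraic form: a relation fails to be a function exactly when some x is paired with two different y values. A tiny sketch (example relations are mine):

```python
def is_function(relation):
    """A relation (iterable of (x, y) pairs) is a function iff no x maps
    to two different y values: the algebraic form of the vertical line test."""
    seen = {}
    for x, y in relation:
        if x in seen and seen[x] != y:
            return False
        seen[x] = y
    return True

print(is_function({(1, 2), (2, 3), (3, 3)}))   # True: each x has one image
print(is_function({(1, 2), (1, 3)}))           # False: x = 1 has two images
```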
MIXTURES OF TRAINED REGRESSION CURVES MODELS FOR HANDWRITTEN ARABIC CHARACTER...gerogepatton
In this paper, we demonstrate how regression curves can be used to recognize 2D non-rigid handwritten shapes. Each shape is represented by a set of non-overlapping, uniformly distributed landmarks. The underlying models utilize 2nd-order polynomials to model the shapes within a training set. To estimate the regression models, we need to extract the coefficients which describe the variations for a set of shape classes; hence, a least squares method is used to estimate such models. We then proceed by training these coefficients using the Expectation Maximization algorithm. Recognition is carried out by finding the least-error landmark displacement with respect to the model curves. Handwritten isolated Arabic characters are used to evaluate our approach.
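The least squares step for one shape class can be sketched as fitting a 2nd-order polynomial to landmark coordinates (synthetic landmarks and a single coordinate, rather than the paper's full 2D model):

```python
import numpy as np

# Landmarks sampled from a quadratic stroke, with small noise.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 20)                  # landmark parameter along the shape
y = 1.0 + 2.0 * t - 3.0 * t**2 + 0.01 * rng.standard_normal(t.size)

# Least squares fit of a 2nd-order polynomial: solve V c ~= y.
V = np.vander(t, 3, increasing=True)       # columns 1, t, t^2
coef, *_ = np.linalg.lstsq(V, y, rcond=None)
print(coef)                                # close to [1, 2, -3]
```

In the paper these fitted coefficients, one set per training shape, are what the EM algorithm then clusters into mixture components.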
Analyzing high-frequency time series is increasingly useful with the current explosion in the availability of these data in several application areas, including but not limited to, climate, finance, health analytics, transportation, etc. This talk will give an overview of two statistical frameworks that could be useful for analyzing high-frequency financial time series leading to quantification of financial risk. These include a distribution free approach using penalized estimating functions for modeling inter-event durations and an approximate Bayesian approach for modeling counts of events in regular intervals. A few other potentially useful lines of research in this area will also be introduced.
Bayes estimators for the shape parameter of pareto type iAlexander Decker
This document discusses Bayesian estimators for the shape parameter of the Pareto Type I distribution under different loss functions. It begins by introducing the Pareto distribution and some classical estimators for the shape parameter, including the maximum likelihood estimator, uniformly minimum variance unbiased estimator, and minimum mean squared error estimator. It then derives the Bayesian estimators under a generalized square error loss function and quadratic loss function. Both informative priors (Exponential distribution) and non-informative priors (Jeffreys prior) are considered. The performance of the estimators is compared using Monte Carlo simulations and mean squared errors.
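The maximum likelihood estimator and the Bayes estimator under the Exponential prior both have closed forms when the scale is known, so a small numpy sketch can illustrate the comparison (synthetic Pareto data; function names are mine, and the posterior-mean estimator is shown as one representative, not the document's full set under all loss functions):

```python
import numpy as np

def pareto_shape_mle(x, xm):
    """MLE of the Pareto Type I shape: alpha_hat = n / sum(log(x_i / x_m))."""
    x = np.asarray(x, float)
    return x.size / np.sum(np.log(x / xm))

def pareto_shape_bayes(x, xm, b):
    """Posterior mean under an Exponential(b) prior on the shape.
    The likelihood is alpha^n exp(-alpha * T) with T = sum log(x_i / x_m),
    so the posterior is Gamma(n + 1, T + b) with mean (n + 1) / (T + b)."""
    x = np.asarray(x, float)
    T = np.sum(np.log(x / xm))
    return (x.size + 1) / (T + b)

rng = np.random.default_rng(3)
alpha, xm = 2.5, 1.0
x = xm * (1.0 - rng.random(5000)) ** (-1.0 / alpha)   # inverse-CDF sampling
print(pareto_shape_mle(x, xm), pareto_shape_bayes(x, xm, b=1.0))
```

With a large sample both estimators concentrate near the true shape; the Monte Carlo comparison in the document measures how their mean squared errors differ for small samples and different priors.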
1) The document proposes a penalized likelihood method using a penalty function like SCAD to perform simultaneous variable selection and parameter estimation in structural equation models (SEMs).
2) The method considers a general SEM where latent variables are linearly regressed on themselves with a coefficient matrix, avoiding the need to specify outcome and explanatory latent variables. Selecting nonzero coefficients in the matrix identifies the structure of the latent variable model.
3) Under regularity conditions, the consistency and oracle properties of the proposed penalized maximum likelihood estimators are established. An expectation-conditional maximization algorithm is developed for computation.
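The SCAD penalty mentioned in point 1 has a standard closed form (Fan and Li's piecewise definition, implemented here as a small sketch with the conventional a = 3.7):

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty: linear near zero (like the lasso), a quadratic blend
    in the middle, then constant, so large coefficients are not over-shrunk."""
    t = np.abs(np.asarray(theta, float))
    small = lam * t
    mid = (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))
    large = lam**2 * (a + 1) / 2
    return np.where(t <= lam, small, np.where(t <= a * lam, mid, large))

print(scad_penalty([0.5, 1.0, 2.0, 10.0], lam=1.0))
```

Because the penalty flattens out beyond a*lam, large structural coefficients in the SEM are left nearly unpenalized, which is what makes the oracle property in point 3 attainable.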
Geoid height determination is one of the major problems of geodesy because the use of satellite
techniques in geodesy is increasing. Geoid heights can be determined using different methods according
to the available data. Soft computing methods such as fuzzy logic and neural networks have become so popular that
they are used to solve many engineering problems. Fuzzy logic theory and later developments in uncertainty
assessment have enabled us to develop more precise models for our requirements. In this study, how to
construct the best fuzzy model is examined. For this purpose, three different data sets were taken and two
different kinds of fuzzy models (two inputs, one output; and three inputs, one output) were formed for the calculation
of geoid heights in Istanbul (Turkey). The fuzzy model results were compared with geoid heights
obtained by GPS/levelling methods, and the fuzzy approximation models were tested on the test points.
Application of Semiparametric Non-Linear Model on Panel Data with Very Small ...IOSRJM
This research work investigated the behaviour of a new semiparametric non-linear (SPNL) model on
a set of panel data with very small time point (T = 1). The SPNL model incorporates the relationship between
individual independent variable and unobserved heterogeneity variable. Five different estimation techniques
namely; Least Square (LS), Generalized Method of Moments (GMM), Continuously Updating (CU), Empirical
Likelihood (EL) and Exponential Tilting (ET) Estimators were employed for the estimation; for the purpose of
modelling the metrical response variable non-linearly on a set of independent variables. The performances of
these estimators on the SPNL model were examined for different parameters in the model using the Least
Square Error (LSE), Mean Absolute Error (MAE) and Median Absolute Error (MedAE) criteria at the lowest
time point (T = 1). The results showed that the ET estimator which provided the least errors of estimation is
relatively more efficient for the proposed model than any of the other estimators considered. It is therefore
recommended that the ET estimator should be employed to estimate the SPNL model for panel data with very
small time point.
Stability criterion of periodic oscillations in a (16)Alexander Decker
This document examines how outliers and excess zeros impact different count data models. The author simulates count data with Poisson distributions and adds outliers and excess zeros. Four models are compared: Poisson, negative binomial, zero-inflated Poisson, and zero-inflated negative binomial. Results show that the zero-inflated negative binomial model best fits the data across sample sizes and outlier magnitudes, with the lowest dispersion indices, AIC values, and BIC values. The zero-inflated negative binomial model is thus recommended for analyzing count data with outliers or excess zeros.
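The dispersion index the document uses to rank the models is easy to demonstrate: adding structural zeros to Poisson counts pushes the variance-to-mean ratio above 1. A minimal numpy sketch (synthetic data, with a 30% zero-inflation rate chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
pois = rng.poisson(3.0, 2000)                 # equidispersed counts
mask = rng.random(2000) < 0.3                 # 30% structural zeros
zi = np.where(mask, 0, pois)                  # zero-inflated sample

def dispersion_index(x):
    """Variance-to-mean ratio: about 1 for Poisson, > 1 under overdispersion."""
    return x.var() / x.mean()

print(dispersion_index(pois))   # close to 1
print(dispersion_index(zi))     # well above 1: excess zeros inflate variance
```

A plain Poisson fit cannot reproduce a dispersion index near 2, which is why the zero-inflated negative binomial model wins the AIC/BIC comparison in the simulation.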
LOGNORMAL ORDINARY KRIGING METAMODEL IN SIMULATION OPTIMIZATIONorajjournal
This paper presents a lognormal ordinary kriging (LOK) metamodel algorithm and its application to
optimize a stochastic simulation problem. Kriging models have been developed as an interpolation method
in geology. They have been successfully used for the deterministic simulation optimization (SO) problem. In
recent years, kriging metamodeling has attracted a growing interest with stochastic problems. SO
researchers have begun using ordinary kriging through global optimization in stochastic systems. The
goals of this study are to present the LOK metamodel algorithm and to analyze the results of the application
step-by-step. The results show that LOK is a powerful alternative metamodel in simulation optimization
when the data are too skewed.
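The core lognormal device behind such a metamodel is the back-transform: predicting in log space and simply exponentiating underestimates the mean, while the lognormal bias correction exp(mu + sigma^2/2) recovers it. A minimal sketch on synthetic skewed data (parameters chosen for illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.lognormal(mean=1.0, sigma=0.8, size=100_000)   # skewed "field" values

# Estimate the mean in log space, then back-transform.
mu, s2 = np.log(x).mean(), np.log(x).var()
naive = np.exp(mu)                  # median-type estimate: biased low
corrected = np.exp(mu + s2 / 2.0)   # lognormal bias correction

print(naive, corrected, x.mean())
```

A lognormal kriging metamodel applies the same correction to kriging predictions made on log-transformed simulation output, which is why it behaves well when the data are too skewed for ordinary kriging.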
A hybrid bacterial foraging and modified particle swarm optimization for mode...IJECEIAES
This paper studies the model reduction procedures used for the reduction of large-scale dynamic models into smaller ones through differential and algebraic equations. A confirmed relevance between these two models exists, and they show the same characteristics under study. These reduction procedures are generally utilized to mitigate computational complexity, facilitate system analysis, and hence reduce time and cost. This paper presents a study of the impact of combining Bacterial Foraging (BF) and Modified Particle Swarm Optimization (MPSO) for the reduced order model (ROM). The proposed hybrid algorithm (BF-MPSO) is comprehensively compared with the BF and MPSO algorithms; a comparison is also made with selected existing techniques.
A combined-conventional-and-differential-evolution-method-for-model-order-red...Cemal Ardil
The document proposes a mixed method for model order reduction of single-input single-output systems. The method combines a conventional technique using Mihailov stability criterion with a differential evolution technique. In the conventional part, the reduced denominator polynomial is derived using Mihailov stability criterion, while the numerator is obtained by matching continued fraction expansions. Then, the denominator polynomial is recalculated using differential evolution optimization to minimize integral squared error between the original and reduced models. The method is demonstrated on a numerical example and shown to produce superior results compared to using only the conventional method.
RESIDUALS AND INFLUENCE IN NONLINEAR REGRESSION FOR REPEATED MEASUREMENT DATAorajjournal
All observations don’t have equal significance in regression analysis. Diagnostics of observations is an important aspect of model building. In this paper, we use diagnostics method to detect residuals and influential points in nonlinear regression for repeated measurement data. Cook distance and Gauss newton method have been proposed to identify the outliers in nonlinear regression analysis and parameter estimation. Most of these techniques based on graphical representations of residuals, hat matrix and case deletion measures. The results
show us detection of single and multiple outliers cases in repeated measurement data. We use these techniques
to explore performance of residuals and influence in nonlinear regression model.
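Cook's distance combines exactly the two ingredients named above, residuals and the hat matrix. A minimal linear-model sketch (the nonlinear case in the paper uses the Gauss-Newton Jacobian in place of X; the data here are synthetic):

```python
import numpy as np

def cooks_distance(X, y):
    """Cook's distance for each observation of an OLS fit:
    D_i = r_i^2 * h_ii / (p * s^2 * (1 - h_ii)^2)."""
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X, X.T)        # hat matrix
    h = np.diag(H)
    r = y - H @ y                                 # residuals
    s2 = (r @ r) / (n - p)
    return (r**2 * h) / (p * s2 * (1 - h)**2)

rng = np.random.default_rng(6)
x = np.linspace(0, 1, 30)
y = 2.0 + 3.0 * x + 0.1 * rng.standard_normal(30)
y[29] += 5.0                                      # gross outlier at one end
X = np.column_stack([np.ones(30), x])
print(int(np.argmax(cooks_distance(X, y))))       # flags the contaminated case
```

For repeated measurement data the same case-deletion logic is applied per subject or per occasion, which is how multiple-outlier patterns are detected.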
Penalized Regressions with Different Tuning Parameter Choosing Criteria and t...CSCJournals
Recently a great deal of attention has been paid to modern regression methods such as penalized regressions which perform variable selection and coefficient estimation simultaneously, thereby providing new approaches to analyze complex data of high dimension. The choice of the tuning parameter is vital in penalized regression. In this paper, we studied the effect of different tuning parameter choosing criteria on the performances of some well-known penalization methods including ridge, lasso, and elastic net regressions. Specifically, we investigated the widely used information criteria in regression models such as Bayesian information criterion (BIC), Akaike’s information criterion (AIC), and AIC correction (AICc) in various simulation scenarios and a real data example in economic modeling. We found that predictive performance of models selected by different information criteria is heavily dependent on the properties of a data set. It is hard to find a universal best tuning parameter choosing criterion and a best penalty function for all cases. The results in this research provide reference for the choices of different criteria for tuning parameter in penalized regressions for practitioners, which also expands the nascent field of applications of penalized regressions.
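The tuning-parameter selection the abstract describes can be illustrated with a minimal numpy sketch (synthetic data; the function name `ridge_aic` is mine): fit ridge regression over a grid of lambda values and pick the one minimizing AIC, with the effective degrees of freedom taken as the trace of the hat matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]                          # sparse ground truth
y = X @ beta + rng.standard_normal(n)

def ridge_aic(X, y, lam):
    """Ridge fit plus AIC, with effective df = trace of the hat matrix."""
    n, p = X.shape
    A = X.T @ X + lam * np.eye(p)
    b_hat = np.linalg.solve(A, X.T @ y)
    H = X @ np.linalg.solve(A, X.T)                  # hat (smoother) matrix
    df = np.trace(H)
    rss = np.sum((y - X @ b_hat) ** 2)
    return n * np.log(rss / n) + 2.0 * df, b_hat, df

lams = np.logspace(-3, 3, 25)
scores = [ridge_aic(X, y, lam)[0] for lam in lams]
best = lams[int(np.argmin(scores))]
print("AIC-selected lambda:", best)
```

Replacing `2.0 * df` with `np.log(n) * df` gives the BIC variant, and a lasso or elastic net path can be scored the same way with df estimated from the number of nonzero coefficients, which is the kind of comparison the simulation study runs.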
Formulas for Surface Weighted Numbers on Graphijtsrd
The boundary value problem for a differential operator on a graph of a specific structure is discussed in this article. The graph has degree-1 vertices and edges that are linked at one common vertex. The boundary value problem is defined by the differential expression with real-valued potentials, the Dirichlet boundary conditions, and the conventional matching requirements. There are a finite number of eigenvalues in this problem. The residues of the diagonal elements of the Weyl matrix at the eigenvalues are referred to as weight numbers. The diagonal elements of the Weyl matrix are meromorphic functions with simple poles at the eigenvalues. The weight numbers under consideration generalize the weight numbers of differential operators on a finite interval, which are equal to the reciprocals of the squared norms of eigenfunctions. These numbers, along with the eigenvalues, serve as spectral data for unique reconstruction of the operator. Contour integration is used to obtain formulas for the weight numbers, as well as formulas for their sums in the case of closely spaced eigenvalues. The formulas can be utilized to analyze inverse spectral problems on graphs. Ghulam Hazrat Aimal Rasa "Formulas for Surface Weighted Numbers on Graph" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6 | Issue-3 , April 2022, URL: https://www.ijtsrd.com/papers/ijtsrd49573.pdf Paper URL: https://www.ijtsrd.com/mathemetics/calculus/49573/formulas-for-surface-weighted-numbers-on-graph/ghulam-hazrat-aimal-rasa
This document summarizes research on using particle swarm optimization to reconstruct microwave images of two-dimensional dielectric scatterers. It formulates the inverse scattering problem as an optimization problem to find the dielectric parameter distribution that minimizes the difference between measured and simulated scattered field data. Numerical results show that a particle swarm optimization approach can accurately reconstruct the shape and dielectric properties of a test cylindrical scatterer, with lower background reconstruction error than a genetic algorithm approach. The research demonstrates that particle swarm optimization is a suitable technique for high-dimensional microwave imaging problems.
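The optimization engine of that reconstruction can be sketched with a minimal particle swarm optimizer minimizing a data-mismatch objective (a generic inertia-plus-cognitive-plus-social PSO on a toy quadratic misfit, not the paper's electromagnetic forward model):

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0, seed=7):
    """Minimal particle swarm optimization with inertia 0.7 and
    cognitive/social weights 1.5."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    for _ in range(iters):
        g = pbest[np.argmin(pval)]                   # global best so far
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
    return pbest[np.argmin(pval)], pval.min()

# Recover a hidden parameter vector by matching "measured" data.
target = np.array([1.2, -0.5, 2.0])
mismatch = lambda p: float(np.sum((p - target) ** 2))
best, err = pso(mismatch, dim=3)
print(best, err)
```

In the microwave imaging problem the mismatch function is the difference between measured and simulated scattered fields, and each particle encodes a candidate dielectric parameter distribution.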
This document discusses modeling the skewness and kurtosis of box office revenue data using the Box-Cox power exponential (BCPE) distribution within the generalized additive models for location, scale and shape (GAMLSS) framework. It finds that the BCPE distribution provides a better fit than the traditionally used Pareto–Levy–Mandelbrot distribution. The flexible four-parameter BCPE distribution allows modeling the location, scale, skewness, and kurtosis parameters of box office revenues as smooth functions of explanatory variables like opening revenues and number of screens. This overcomes limitations of previous models and provides a better understanding of box office revenues across different time periods.
The document describes the application of log-linear modeling on medical data from Akanu Ibiam Federal Polytechnic Medical Centre. Log-linear models were used to study the associations between age, sex, and blood group. Interactions between the variables were observed in the fitted models. Specifically, interactions between age and sex, sex and blood group, and age and blood group were significant at the 5% level based on tests of partial association.
An econometric model for Linear Regression using StatisticsIRJET Journal
This document discusses linear regression modeling using statistics. It begins by introducing linear regression and its assumptions. Both univariate and multivariate linear regression are covered. The coefficients are derived using statistics in matrix form. Properties of ordinary least squares estimators like their expected values and variances are proven. Hypothesis testing for multiple linear regression is presented in matrix form. The document emphasizes the importance of understanding linear regression for prediction and its application in fields like economics and social sciences. Rigorous statistical analysis is needed to ensure the validity of regression models.
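The matrix-form derivation summarized above reduces to a few lines of numpy: the OLS estimator beta_hat = (X'X)^{-1} X'y, the unbiased error variance, and the coefficient covariance used for hypothesis tests (synthetic data for illustration):

```python
import numpy as np

# Multivariate linear regression in matrix form.
rng = np.random.default_rng(8)
n = 200
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)     # OLS estimator
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - X.shape[1])    # unbiased error variance
cov_beta = sigma2_hat * np.linalg.inv(X.T @ X)   # Var(beta_hat)

print(beta_hat)
print(np.sqrt(np.diag(cov_beta)))                # standard errors
```

Dividing each coefficient by its standard error gives the t-statistics used in the hypothesis tests the document presents in matrix form.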
SENSITIVITY ANALYSIS IN A LIDARCAMERA CALIBRATIONcscpconf
In this paper, a variability analysis was performed on the model calibration methodology between a multi-camera system and a LiDAR laser sensor (Light Detection and Ranging). Both sensors are used to digitize urban environments. A practical and complete methodology is presented to predict the error propagation inside the LiDAR-camera calibration. We perform a sensitivity analysis in a local and a global way. The local approach analyses the output variance with respect to the input, varying only one parameter at a time. In the global sensitivity approach, all parameters are varied simultaneously and sensitivity indexes are calculated over the total variation range of the input parameters. We quantify the uncertainty behaviour in the intrinsic camera parameters and the relationship between the noisy data of both sensors and their calibration. We calculated the sensitivity indexes by two techniques, Sobol and FAST (Fourier amplitude sensitivity test). Statistics of the sensitivity analysis are displayed for each sensor, along with the sensitivity ratio in the laser-camera calibration data.
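The global Sobol indices mentioned above can be estimated by the standard pick-freeze Monte Carlo scheme; here is a minimal sketch on a linear toy model with known analytic indices (independent standard-normal inputs assumed, which is a simplification of the calibration setting):

```python
import numpy as np

def first_order_sobol(f, d, n=20_000, seed=9):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    for independent standard-normal inputs."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, d))
    B = rng.standard_normal((n, d))
    yA, yB = f(A), f(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        C = B.copy()
        C[:, i] = A[:, i]               # freeze input i from the A sample
        yC = f(C)
        S[i] = np.mean(yA * (yC - yB)) / var
    return S

# Linear test model y = 3*x1 + x2: analytic S1 = 9/10, S2 = 1/10.
f = lambda X: 3.0 * X[:, 0] + X[:, 1]
print(first_order_sobol(f, d=2))
```

In the calibration study, f would be the calibration pipeline and the inputs the intrinsic camera parameters and sensor noise, with indices computed over each parameter's full variation range.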
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LAYER THICKNESS IN WIRE-...IAEME Publication
White layer thickness (WLT) and surface roughness formed in wire electric discharge turning (WEDT) of tungsten carbide composite have been modeled through response surface methodology (RSM). A Taguchi standard design of experiments involving five input variables with three levels was employed to establish a mathematical model between the input parameters and the responses. The percentage of cobalt content, spindle speed, pulse on-time, wire feed and pulse off-time were changed during the experimental tests based on the Taguchi orthogonal array L27 (3^13). Analysis of variance (ANOVA) revealed that the mathematical models obtained can adequately describe performance within the ranges of the factors considered. There was a good agreement between the experimental and predicted values in this study.
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURSIAEME Publication
The study explores the reasons for transgenders to become entrepreneurs. In this study the transgender entrepreneur was taken as the independent variable and the reasons to become one as the dependent variable. Data were collected through a structured questionnaire containing a five-point Likert scale. The study examined the data of 30 transgender entrepreneurs in the Salem Municipal Corporation of Tamil Nadu State, India. A simple random sampling technique was used. The Garrett Ranking Technique (percentile position, mean scores) was used as the analysis for the present study to identify the top 13 stimulus factors for the establishment of a trans entrepreneurial venture. The economic advancement of a nation is governed by the upshot of resolute entrepreneurial doings. The conception of entrepreneurship has stretched and materialized to the socially deflated, uncharted sections of the transgender community. Presently transgenders have smashed their stereotypes and are making recent headlines of achievements in various fields of our Indian society. The trans community is gradually being observed in a new light and has been trying to achieve prospective growth in entrepreneurship. The findings of the research revealed that optimistic changes are taking place towards an affirmative societal outlook on transgender entrepreneurial ventureship. It also laid emphasis on other transgenders renovating their traditional living. The paper also highlights that legislators and supervisory bodies should endorse impartial canons and reforms in the Tamil Nadu Transgender Welfare Board Association.
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURSIAEME Publication
Since ages, gender difference has always been a debatable theme, whether caused by nature, evolution or environment. The birth of a transgender is dreadful not only for the child but also for their parents. The pain of living in the wrong physique and being treated as a second-class, victimized citizen is outrageous and fully harboured with vicious, baseless negative scruples. For so long, social exclusion has perpetuated inequality and deprivation, with ingrained malign stigma and besieged victims of crime or violence across their life spans. They are pushed into a murky way of life with a source of eternal disgust, bereft sexual potency and perennial fear. Although they are highly visible, very little is known about them. The common public needs to comprehend the ravaged arrogance towards these insensitive souls and assist in integrating them into the mainstream by offering equal opportunity, treating them with humanity and respecting their dignity. Entrepreneurship in the current age is endorsing the gender fairness movement. Unstable careers and economic inadequacy have inclined one group of gender-variant people, called transgender, to become entrepreneurs. These tiny budding entrepreneurs resulted in economic transition by means of employment, freedom from the clutches of stereotype jobs, a raised standard of living and a handful of financial empowerment. Besides all these inhibitions, they were able to witness a platform for skill set development that ignited them to enter the entrepreneurial domain. This paper epitomizes the skill sets involved in trans-entrepreneurs of Thoothukudi Municipal Corporation of Tamil Nadu State and is a groundbreaking determination to sightsee the various skills incorporated and their impact on entrepreneurship.
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONSIAEME Publication
The banking and financial services industries are experiencing increased technology penetration. Among them, the banking industry has made technological advancements to better serve the general populace. The economy focused on transforming the banking sector's system into a cashless, paperless, and faceless one. The researcher wants to evaluate the user's intention for utilising a mobile banking application. The study also examines the variables affecting the user's behaviour intention when selecting specific applications for financial transactions. The researcher employed a well-structured questionnaire and a descriptive study methodology to gather the respondents' primary data utilising the snowball sampling technique. The study includes variables like performance expectations, effort expectations, social impact, enabling circumstances, and perceived risk. Each of the aforementioned variables has a major impact on how users utilise mobile banking applications. The outcome will assist the service provider in comprehending the user's history with mobile banking applications.
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONSIAEME Publication
Technology upgradation in the banking sector has moved the economy toward online payment modes using mobile applications. This system enables convenient connectivity between banks, merchants and users. There are various applications used for online transactions, such as Google Pay, Paytm, Freecharge, MobiKwik, Oxigen, PhonePe and so on, including mobile banking applications. The study aimed at evaluating the predilection of users in adopting digital transactions. The study is descriptive in nature, and the researcher used random sampling techniques to collect the data. The findings reveal that the mobile applications differ in the quality of service rendered by GPay and PhonePe. The researcher suggests that the PhonePe application should focus on implementing a user-friendly interface, and GPay on motivating users to appreciate the request-for-money feature and the modes of payment in the application.
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINOIAEME Publication
The prototype of a voice-based ATM for the visually impaired using Arduino is designed to help people who are blind. It uses RFID cards which contain the user's fingerprint encrypted on them, and it interacts with the user through voice commands. The ATM operates when a sensor detects the presence of one person in the cabin. After scanning the RFID card, it asks the user to select a mode: normal or blind. The user can select the respective mode through voice input; if blind mode is selected, balance checks or cash withdrawals can be done through voice input. The normal mode procedure is the same as in an existing ATM.
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...IAEME Publication
There is increasing acceptability of emotional intelligence as a major factor in personality assessment and effective human resource management. Emotional intelligence as the ability to build capacity, empathize, co-operate, motivate and develop others cannot be divorced from both effective performance and human resource management systems. The human person is crucial in defining organizational leadership and fortunes in terms of challenges and opportunities and walking across both multinational and bilateral relationships. The growing complexity of the business world requires a great deal of self-confidence, integrity, communication, conflict and diversity management to keep the global enterprise within the paths of productivity and sustainability. Using the exploratory research design and 255 participants the result of this original study indicates strong positive correlation between emotional intelligence and effective human resource management. The paper offers suggestions on further studies between emotional intelligence and human capital development and recommends for conflict management as an integral part of effective human resource management.
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMYIAEME Publication
Our life journey, in general, is closely defined by the way we understand the meaning of why we coexist and deal with its challenges. As we develop the "inspiration economy", we could say that nearly all of the challenges we have faced are opportunities that help us to discover the rest of our journey. In this note paper, we explore how being faced with the opportunity of being a close carer for an aging parent with dementia brought intangible discoveries that changed our insight of the meaning of the rest of our life journey.
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...IAEME Publication
The main objective of this study is to analyze the impact of aspects of Organizational Culture on the Effectiveness of the Performance Management System (PMS) in the Health Care Organization at Thanjavur. Organizational Culture and PMS play a crucial role in present-day organizations in achieving their objectives. PMS needs employees’ cooperation to achieve its intended objectives. Employees' cooperation depends upon the organization’s culture. The present study uses exploratory research to examine the relationship between the Organization's culture and the Effectiveness of the Performance Management System. The study uses a Structured Questionnaire to collect the primary data. For this study, Thirty-six non-clinical employees were selected from twelve randomly selected Health Care organizations at Thanjavur. Thirty-two fully completed questionnaires were received.
Living in 21st century in itself reminds all of us the necessity of police and its administration. As more and more we are entering into the modern society and culture, the more we require the services of the so called ‘Khaki Worthy’ men i.e., the police personnel. Whether we talk of Indian police or the other nation’s police, they all have the same recognition as they have in India. But as already mentioned, their services and requirements are different after the like 26th November, 2008 incidents, where they without saving their own lives has sacrificed themselves without any hitch and without caring about their respective family members and wards. In other words, they are like our heroes and mentors who can guide us from the darkness of fear, militancy, corruption and other dark sides of life and so on. Now the question arises, if Gandhi would have been alive today, what would have been his reaction/opinion to the police and its functioning? Would he have some thing different in his mind now what he had been in his mind before the partition or would he be going to start some Satyagraha in the form of some improvement in the functioning of the police administration? Really these questions or rather night mares can come to any one’s mind, when there is too much confusion is prevailing in our minds, when there is too much corruption in the society and when the polices working is also in the questioning because of one or the other case throughout the India. It is matter of great concern that we have to thing over our administration and our practical approach because the police personals are also like us, they are part and parcel of our society and among one of us, so why we all are pin pointing towards them.
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...IAEME Publication
The goal of this study was to see how talent management affected employee retention in the selected IT organizations in Chennai. The fundamental issue was the difficulty to attract, hire, and retain talented personnel who perform well and the gap between supply and demand of talent acquisition and retaining them within the firms. The study's main goals were to determine the impact of talent management on employee retention in IT companies in Chennai, investigate talent management strategies that IT companies could use to improve talent acquisition, performance management, career planning and formulate retention strategies that the IT firms could use. The respondents were given a structured close-ended questionnaire with the 5 Point Likert Scale as part of the study's quantitative research design. The target population consisted of 289 IT professionals. The questionnaires were distributed and collected by the researcher directly. The Statistical Package for Social Sciences (SPSS) was used to collect and analyse the questionnaire responses. Hypotheses that were formulated for the various areas of the study were tested using a variety of statistical tests. The key findings of the study suggested that talent management had an impact on employee retention. The studies also found that there is a clear link between the implementation of talent management and retention measures. Management should provide enough training and development for employees, clarify job responsibilities, provide adequate remuneration packages, and recognise employees for exceptional performance.
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...IAEME Publication
Globally, Millions of dollars were spent by the organizations for employing skilled Information Technology (IT) professionals. It is costly to replace unskilled employees with IT professionals possessing technical skills and competencies that aid in interconnecting the business processes. The organization’s employment tactics were forced to alter by globalization along with technological innovations as they consistently diminish to remain lean, outsource to concentrate on core competencies along with restructuring/reallocate personnel to gather efficiency. As other jobs, organizations or professions have become reasonably more appropriate in a shifting employment landscape, the above alterations trigger both involuntary as well as voluntary turnover. The employee view on jobs is also afflicted by the COVID-19 pandemic along with the employee-driven labour market. So, having effective strategies is necessary to tackle the withdrawal rate of employees. By associating Emotional Intelligence (EI) along with Talent Management (TM) in the IT industry, the rise in attrition rate was analyzed in this study. Only 303 respondents were collected out of 350 participants to whom questionnaires were distributed. From the employees of IT organizations located in Bangalore (India), the data were congregated. A simple random sampling methodology was employed to congregate data as of the respondents. Generating the hypothesis along with testing is eventuated. The effect of EI and TM along with regression analysis between TM and EI was analyzed. The outcomes indicated that employee and Organizational Performance (OP) were elevated by effective EI along with TM.
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...IAEME Publication
By implementing talent management strategy, organizations would have the option to retain their skilled professionals while additionally working on their overall performance. It is the course of appropriately utilizing the ideal individuals, setting them up for future top positions, exploring and dealing with their performance, and holding them back from leaving the organization. It is employee performance that determines the success of every organization. The firm quickly obtains an upper hand over its rivals in the event that its employees having particular skills that cannot be duplicated by the competitors. Thus, firms are centred on creating successful talent management practices and processes to deal with the unique human resources. Firms are additionally endeavouring to keep their top/key staff since on the off chance that they leave; the whole store of information leaves the firm's hands. The study's objective was to determine the impact of talent management on organizational performance among the selected IT organizations in Chennai. The study recommends that talent management limitedly affects performance. On the off chance that this talent is appropriately management and implemented properly, organizations might benefit as much as possible from their maintained assets to support development and productivity, both monetarily and non-monetarily.
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...IAEME Publication
Banking regulations act of India, 1949 defines banking as “acceptance of deposits for the purpose of lending or investment from the public, repayment on demand or otherwise and withdrawable through cheques, drafts order or otherwise”, the major participants of the Indian financial system are commercial banks, the financial institution encompassing term lending institutions. Investments institutions, specialized financial institution and the state level development banks, non banking financial companies (NBFC) and other market intermediaries such has the stock brokers and money lenders are among the oldest of the certain variants of NBFC and the oldest market participants. The asset quality of banks is one of the most important indicators of their financial health. The Indian banking sector has been facing severe problems of increasing Non- Performing Assets (NPAs). The NPAs growth directly and indirectly affects the quality of assets and profitability of banks. It also shows the efficiency of banks credit risk management and the recovery effectiveness. NPA do not generate any income, whereas, the bank is required to make provisions for such as assets that why is a double edge weapon. This paper outlines the concept of quality of bank loans of different types like Housing, Agriculture and MSME loans in state Haryana of selected public and private sector banks. This study is highlighting problems associated with the role of commercial bank in financing Small and Medium Scale Enterprises (SME). The overall objective of the research was to assess the effect of the financing provisions existing for the setting up and operations of MSMEs in the country and to generate recommendations for more robust financing mechanisms for successful operation of the MSMEs, in turn understanding the impact of MSME loans on financial institutions due to NPA. 
There are many research conducted on the topic of Non- Performing Assets (NPA) Management, concerning particular bank, comparative study of public and private banks etc. In this paper the researcher is considering the aggregate data of selected public sector and private sector banks and attempts to compare the NPA of Housing, Agriculture and MSME loans in state Haryana of public and private sector banks. The tools used in the study are average and Anova test and variance. The findings reveal that NPA is common problem for both public and private sector banks and is associated with all types of loans either that is housing loans, agriculture loans and loans to SMES. NPAs of both public and private sector banks show the increasing trend. In 2010-11 GNPA of public and private sector were at same level it was 2% but after 2010-11 it increased in many fold and at present there is GNPA in some more than 15%. It shows the dark area of Indian banking sector.
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...IAEME Publication
An experiment conducted in this study found that BaSO4 changed Nylon 6's mechanical properties. By changing the weight ratios, BaSO4 was used to make Nylon 6. This Researcher looked into how hard Nylon-6/BaSO4 composites are and how well they wear. Experiments were done based on Taguchi design L9. Nylon-6/BaSO4 composites can be tested for their hardness number using a Rockwell hardness testing apparatus. On Nylon/BaSO4, the wear behavior was measured by a wear monitor, pinon-disc friction by varying reinforcement, sliding speed, and sliding distance, and the microstructure of the crack surfaces was observed by SEM. This study provides significant contributions to ultimate strength by increasing BaSO4 content up to 16% in the composites, and sliding speed contributes 72.45% to the wear rate
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...IAEME Publication
The majority of the population in India lives in villages. The village is the back bone of the country. Village or rural industries play an important role in the national economy, particularly in the rural development. Developing the rural economy is one of the key indicators towards a country’s success. Whether it be the need to look after the welfare of the farmers or invest in rural infrastructure, Governments have to ensure that rural development isn’t compromised. The economic development of our country largely depends on the progress of rural areas and the standard of living of rural masses. Village or rural industries play an important role in the national economy, particularly in the rural development. Rural entrepreneurship is based on stimulating local entrepreneurial talent and the subsequent growth of indigenous enterprises. It recognizes opportunity in the rural areas and accelerates a unique blend of resources either inside or outside of agriculture. Rural entrepreneurship brings an economic value to the rural sector by creating new methods of production, new markets, new products and generate employment opportunities thereby ensuring continuous rural development. Social Entrepreneurship has the direct and primary objective of serving the society along with the earning profits. So, social entrepreneurship is different from the economic entrepreneurship as its basic objective is not to earn profits but for providing innovative solutions to meet the society needs which are not taken care by majority of the entrepreneurs as they are in the business for profit making as a sole objective. So, the Social Entrepreneurs have the huge growth potential particularly in the developing countries like India where we have huge societal disparities in terms of the financial positions of the population. 
Still 22 percent of the Indian population is below the poverty line and also there is disparity among the rural & urban population in terms of families living under BPL. 25.7 percent of the rural population & 13.7 percent of the urban population is under BPL which clearly shows the disparity of the poor people in the rural and urban areas. The need to develop social entrepreneurship in agriculture is dictated by a large number of social problems. Such problems include low living standards, unemployment, and social tension. The reasons that led to the emergence of the practice of social entrepreneurship are the above factors. The research problem lays upon disclosing the importance of role of social entrepreneurship in rural development of India. The paper the tendencies of social entrepreneurship in India, to present successful examples of such business for providing recommendations how to improve situation in rural areas in terms of social entrepreneurship development. Indian government has made some steps towards development of social enterprises, social entrepreneurship, and social in- novation, but a lot remains to be improved.
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...IAEME Publication
Distribution system is a critical link between the electric power distributor and the consumers. Most of the distribution networks commonly used by the electric utility is the radial distribution network. However in this type of network, it has technical issues such as enormous power losses which affect the quality of the supply. Nowadays, the introduction of Distributed Generation (DG) units in the system help improve and support the voltage profile of the network as well as the performance of the system components through power loss mitigation. In this study network reconfiguration was done using two meta-heuristic algorithms Particle Swarm Optimization and Gravitational Search Algorithm (PSO-GSA) to enhance power quality and voltage profile in the system when simultaneously applied with the DG units. Backward/Forward Sweep Method was used in the load flow analysis and simulated using the MATLAB program. Five cases were considered in the Reconfiguration based on the contribution of DG units. The proposed method was tested using IEEE 33 bus system. Based on the results, there was a voltage profile improvement in the system from 0.9038 p.u. to 0.9594 p.u.. The integration of DG in the network also reduced power losses from 210.98 kW to 69.3963 kW. Simulated results are drawn to show the performance of each case.
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...IAEME Publication
Manufacturing industries have witnessed an outburst in productivity. For productivity improvement manufacturing industries are taking various initiatives by using lean tools and techniques. However, in different manufacturing industries, frugal approach is applied in product design and services as a tool for improvement. Frugal approach contributed to prove less is more and seems indirectly contributing to improve productivity. Hence, there is need to understand status of frugal approach application in manufacturing industries. All manufacturing industries are trying hard and putting continuous efforts for competitive existence. For productivity improvements, manufacturing industries are coming up with different effective and efficient solutions in manufacturing processes and operations. To overcome current challenges, manufacturing industries have started using frugal approach in product design and services. For this study, methodology adopted with both primary and secondary sources of data. For primary source interview and observation technique is used and for secondary source review has done based on available literatures in website, printed magazines, manual etc. An attempt has made for understanding application of frugal approach with the study of manufacturing industry project. Manufacturing industry selected for this project study is Mahindra and Mahindra Ltd. This paper will help researcher to find the connections between the two concepts productivity improvement and frugal approach. This paper will help to understand significance of frugal approach for productivity improvement in manufacturing industry. This will also help to understand current scenario of frugal approach in manufacturing industry. In manufacturing industries various process are involved to deliver the final product. In the process of converting input in to output through manufacturing process productivity plays very critical role. 
Hence this study will help to evolve status of frugal approach in productivity improvement programme. The notion of frugal can be viewed as an approach towards productivity improvement in manufacturing industries.
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENTIAEME Publication
In this paper, we investigated a queuing model of fuzzy environment-based a multiple channel queuing model (M/M/C) ( /FCFS) and study its performance under realistic conditions. It applies a nonagonal fuzzy number to analyse the relevant performance of a multiple channel queuing model (M/M/C) ( /FCFS). Based on the sub interval average ranking method for nonagonal fuzzy number, we convert fuzzy number to crisp one. Numerical results reveal that the efficiency of this method. Intuitively, the fuzzy environment adapts well to a multiple channel queuing models (M/M/C) ( /FCFS) are very well.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
The CBC machine is a common diagnostic tool used by doctors to measure a patient's red blood cell count, white blood cell count and platelet count. The machine uses a small sample of the patient's blood, which is then placed into special tubes and analyzed. The results of the analysis are then displayed on a screen for the doctor to review. The CBC machine is an important tool for diagnosing various conditions, such as anemia, infection and leukemia. It can also help to monitor a patient's response to treatment.
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
artificial intelligence and data science contents.pptxGauravCar
What is artificial intelligence? Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.
› ...
Artificial intelligence (AI) | Definitio
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Remi J. Dare, Olumide S. Adesina, Pelumi E. Oguntunde and Olasunmbo O. Agboola
http://www.iaeme.com/IJMET/index.asp 1965 editor@iaeme.com
preferred in fitting count datasets irrespective of the form of dispersion. Famoye (1993) proposed a restricted Generalized Poisson regression model for handling dispersed count datasets. The model is considered an extension of the family of the Generalized Poisson Distribution (GPD), since the latter was found inadequate for fitting count datasets effectively. Resmi et al. (2013) established cases of biased and misleading results associated with highly skewed distributions with excess zeros; the instance given was clinical data on children with electrophysiological disorders, many of whom were treated without surgery.
A Poisson experiment concerns the number of occurrences of an event in a given time interval or at a specific location. The Poisson distribution is one of the simplest and perhaps most frequently used probability distributions for modeling the number of times an event occurs. Authors who applied Poisson regression to model count datasets include Yip (1988), Romundstad et al. (2001), Winkelmann (2004), Heller et al. (2007) and Gagnon et al. (2008), to mention but a few. The Poisson distribution assumes that the mean equals the variance, which makes the Poisson regression model inadequate for datasets that exhibit any form of dispersion other than equi-dispersion; in addition, Poisson regression is intrinsically heteroskedastic and therefore restrictive.
The Negative Binomial regression model is superior to the Poisson regression model because it has an extra parameter that makes it suitable for modeling over-dispersion. The derivation of Negative Binomial regression follows a Bayesian principle (Cameron & Trivedi, 2005). Readers can refer to Hilbe (2007) for a detailed discussion of the Negative Binomial regression model.
COM-Poisson regression operates with a link function for the dependent variable. Among others who took advantage of the COM-Poisson model are Ridout & Besbeas (2004) and Kalyanam et al. (2007). Cameron et al. (1988), Pohlmeier & Ulrich (1995), Grootendorst (1995) and Geil et al. (1996) considered doctor visits, specialist visits, drug prescriptions, number of hospital stays and hospitalizations, respectively, while Sellers & Shmueli (2010) and Shmueli et al. (2005) gave robust applications of the COM-Poisson regression model.
Cameron & Trivedi (2005) pointed out that the conventional parametric count distributions, the Poisson and Negative Binomial models, often fail to describe empirical distributions adequately at some levels of dispersion. A major challenge is that they cannot model a variance-to-mean ratio below 1, which occurs in many count datasets. COM-Poisson regression is therefore identified as suitable for count datasets irrespective of the form of dispersion the data exhibit. In order to fit COM-Poisson regression to a dataset, the normalizing constant must be known, which requires its estimation. This study is aimed at discussing the COM-Poisson regression model extensively and carrying out a model comparison among various models. Section 2 of this paper discusses the materials and methods used, while the results are presented in Section 3.
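As a quick diagnostic for the dispersion issue discussed above, the variance-to-mean ratio of the response can be computed before a model is chosen. The sketch below is illustrative only; the counts are hypothetical and not the doctor-visit data analyzed in this paper.

```python
import numpy as np

def dispersion_ratio(y):
    """Sample variance-to-mean ratio: ~1 equi-, >1 over-, <1 under-dispersed."""
    y = np.asarray(y, dtype=float)
    return y.var(ddof=1) / y.mean()

# Hypothetical visit counts with many zeros and a long right tail
visits = [0, 0, 0, 0, 1, 1, 2, 3, 7, 12]
print(dispersion_ratio(visits))  # 6.0 -> strongly over-dispersed
```

A ratio near 1 suggests equi-dispersion (plain Poisson may suffice), above 1 over-dispersion (Negative Binomial or COM-Poisson), and below 1 under-dispersion, which of the models above only COM-Poisson handles.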
2. MATERIALS AND METHOD
Some basic regression models for fitting count datasets are itemized and discussed as follows:
2.1. Poisson Regression
For a Poisson model, the mean $\mu_i$ is expressed in terms of the explanatory variables $x$ using a suitable link function. The model can be described as:

$g(\mu_i) = x'\beta$  (1)

The link $g(\mu_i)$ may be the identity link, $\mu_i = x'\beta$, or the log link, $\log(\mu_i) = x'\beta$. Under the log link, $\hat{\mu}_i = \exp(x'\hat{\beta})$ is guaranteed to be positive, which is not necessarily the case with the identity link. The mean and variance of a Poisson regression are:
Adaptive Regression Model for Highly Skewed Count Data
$E(Y_i) = \exp(x_i'\beta)$  (2)

and

$V(Y_i) = \exp(x_i'\beta)$  (3)

respectively. Its log-likelihood function can be given by:

$\ln L(\beta) = \sum_{i=1}^{n}\left[ y_i x_i'\beta - \exp(x_i'\beta) - \ln(y_i!) \right]$  (4)
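The log-likelihood in (4) can be maximized numerically to obtain $\hat{\beta}$. A minimal sketch, assuming SciPy is available; the design matrix and counts below are made up for illustration and are not this paper's dataset.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def poisson_loglik(beta, X, y):
    """Equation (4): sum_i [ y_i x_i'beta - exp(x_i'beta) - ln(y_i!) ]."""
    eta = X @ beta
    return np.sum(y * eta - np.exp(eta) - gammaln(y + 1))

# Made-up design: intercept plus one covariate; counts grow with x
X = np.column_stack([np.ones(6), np.arange(6.0)])
y = np.array([1.0, 2, 2, 6, 9, 13])
fit = minimize(lambda b: -poisson_loglik(b, X, y), x0=np.zeros(2))
beta_hat = fit.x
print(np.round(beta_hat, 2))  # log-scale intercept and slope
```

At the optimum with an intercept in the model, the score equations force $\sum_i \hat{\mu}_i = \sum_i y_i$, which is a useful convergence check.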
2.2. Negative Binomial Regression
For parameters $\mu$ and $\delta$, Negative Binomial regression can be expressed by the Bayesian (Poisson-gamma mixture) procedure as:

$h(y \mid \mu, \delta) = \int_{0}^{\infty} \frac{e^{-v} v^{y}}{y!} \cdot \frac{\delta^{\delta}\, v^{\delta-1}\, e^{-v\delta/\mu}}{\mu^{\delta}\, \Gamma(\delta)}\, dv$  (5)

Further simplification gives:

$h(y \mid \mu, \delta) = \frac{\delta^{\delta}}{\mu^{\delta}\, y!\, \Gamma(\delta)} \int_{0}^{\infty} e^{-v(1+\delta/\mu)}\, v^{y+\delta-1}\, dv$  (6)
The model's mean and variance can be expressed as:

$E(Y_i) = \exp(x_i'\beta)$  (7)

and

$V(Y_i) = \exp(x_i'\beta) + \delta\left[\exp(x_i'\beta)\right]^{2}$  (8)
Respectively. The likelihood function of Negative Binomial model is given as:
( ) ( 1) ( ) ( )
( ) ( ) ( ) ( )
i im y m y
i i i i
i
i i i i i i
y m y m k km m
l
m y m m m y m m
µ µ
µ µ µ µ
Γ + + − Γ
= =
Γ Γ + + Γ Γ + +
L
*
* * *
* *
( 1) ( ) ( 1)
( 1)! ( 1)!
k
i im y e y
m k m
i i i i
i m m
i i i i i i
y m m y e em e
l
y m m y e e
µ µ
µ µ µ µ
+ − + −
= =
− + + − + +
L L
(9)
Where m is the extra parameter responsible for taking care of over-dispersion in count data?
The log-likelihood function represented by L is:
[ ]
1
* * * * *
0
ln( ) ln ( 1)! ln( ) ln( ) ( )ln( )
iy
m m m m m
i i i i i
j
L e j y e e y e y eµ µ
−
=
= + − − + + − + +∑
(10)
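Both forms can be checked numerically: the term-wise log of the Negative Binomial likelihood must agree with the summed log-likelihood, and each term matches SciPy's negative binomial with $n = m$ and $p = m/(m+\mu_i)$. A sketch with made-up values (not estimates from this paper):

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import nbinom

def nb_logpmf(y, mu, m):
    """One term of likelihood (9), on the log scale."""
    return (gammaln(y + m) - gammaln(m) - gammaln(y + 1)
            + m * np.log(m / (m + mu)) + y * np.log(mu / (m + mu)))

def nb_loglik(y, mu, m):
    """Equation (10): log-likelihood over a sample, using sum_{j=0}^{y_i-1} ln(m+j)."""
    total = 0.0
    for yi, mui in zip(y, mu):
        total += (sum(np.log(m + j) for j in range(int(yi)))
                  - gammaln(yi + 1) + yi * np.log(mui)
                  + m * np.log(m) - (yi + m) * np.log(mui + m))
    return total

y = np.array([0.0, 2, 5, 1])     # hypothetical counts and fitted means
mu = np.array([1.5, 2.0, 3.0, 2.5])
m = 1.7
print(np.isclose(nb_loglik(y, mu, m), nb_logpmf(y, mu, m).sum()))  # True
```

The agreement confirms that the $\sum_{j} \ln(m+j)$ expansion of $\ln\Gamma(y_i+m) - \ln\Gamma(m)$ in (10) is consistent with (9).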
2.3. Zero Inflated Poisson (ZIP) Regression Models
The count variable of interest can be represented by $Y$, which is described by:

$P(Y_{ij} = y_{ij}) = \begin{cases} \omega_{ij} + (1 - \omega_{ij})\exp(-\lambda_{ij}), & y_{ij} = 0 \\ (1 - \omega_{ij})\exp(-\lambda_{ij})\,\lambda_{ij}^{y_{ij}}/y_{ij}!, & y_{ij} > 0 \end{cases}$  (11)
The ZIP mean and variance are:

$E(Y_{ij}) = (1 - \omega_{ij})\lambda_{ij}$  (12)

and

$\operatorname{var}(Y_{ij}) = (1 - \omega_{ij})\lambda_{ij}(1 + \omega_{ij}\lambda_{ij})$  (13)

respectively. Yau et al. (2003) submitted that, in regression analysis, both the mean $\lambda_{ij}$ and the zero-proportion $\omega_{ij}$ parameters are linked to covariate vectors $x_{ij}$ and $z_{ij}$, respectively.
The ZIP mixed regression model can be expressed as:

$\eta_{ij} = \log(\lambda_{ij}) = x_{ij}'\beta + u_i$  (14)

$\xi_{ij} = \log\!\left(\frac{\omega_{ij}}{1 - \omega_{ij}}\right) = z_{ij}'\gamma + v_i$  (15)
From Equation (15), we have:

$\omega_{ij} = \frac{\exp(z_{ij}'\gamma + v_i)}{1 + \exp(z_{ij}'\gamma + v_i)}$  (16)
Therefore, the likelihood function for the ZIP model is:

$L(\lambda, \gamma) = \prod_{i=1}^{n}\left[\frac{\exp(z_{ij}'\gamma + v_i)}{1 + \exp(z_{ij}'\gamma + v_i)} + \frac{1}{1 + \exp(z_{ij}'\gamma + v_i)}\exp(-\lambda_{ij})\right]$  (17)

for $y = 0$, or

$L(\lambda, \gamma) = \prod_{i=1}^{n}\frac{1}{1 + \exp(z_{ij}'\gamma + v_i)}\exp(-\lambda_{ij})\,\lambda_{ij}^{y_{ij}}/y_{ij}!$  (18)

for $y > 0$.
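The ZIP probability function in (11) and the moment formulas (12)–(13) can be verified numerically. A sketch with hypothetical $\lambda$ and $\omega$ (these are not values estimated in this paper):

```python
import numpy as np
from scipy.stats import poisson

def zip_pmf(y, lam, omega):
    """Equation (11): omega + (1-omega)e^{-lam} at y=0, else (1-omega)*Poisson(lam) pmf."""
    base = (1 - omega) * poisson.pmf(y, lam)
    return np.where(np.asarray(y) == 0, omega + base, base)

lam, omega = 3.0, 0.4          # hypothetical mean and zero-inflation weight
ys = np.arange(60)
p = zip_pmf(ys, lam, omega)
print(round(p.sum(), 6), round((ys * p).sum(), 4))  # 1.0 1.8  (mean = (1-omega)*lam)
```

The empirical mean reproduces (12), and the empirical second moment reproduces the variance in (13), which is how the extra $\omega_{ij}\lambda_{ij}$ factor inflates dispersion relative to a plain Poisson.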
Following the matrix notation for the ZINB model,

$X = [x_{11}, \ldots, x_{1n_1}, \ldots, x_{m1}, \ldots, x_{mn_m}]'$,
$W = \operatorname{diag}(\mathbf{1}_{n_1}, \mathbf{1}_{n_2}, \ldots, \mathbf{1}_{n_m})$,
$w = [w_{11}, \ldots, w_{1n_1}, \ldots, w_{m1}, \ldots, w_{mn_m}]'$,
$Z = [z_{11}, \ldots, z_{1n_1}, \ldots, z_{m1}, \ldots, z_{mn_m}]'$.

The mixed regression model can be expressed as:

$\xi = \log\!\left(\frac{\omega}{1 - \omega}\right) = Z\gamma + Wv$  (19)

$\eta = \log(\lambda) = X\beta + Wu$
2.4. COM-Poisson distribution
Conway & Maxwell (1962) first proposed the COM-Poisson distribution as a way of handling queuing systems. The density function can be described as:

P(Y = y \mid \lambda, \nu) = \frac{\lambda^{y}}{(y!)^{\nu}} \cdot \frac{1}{Z(\lambda, \nu)}; \quad y = 0, 1, 2, \ldots   (20)
5. Adaptive Regression Model for Highly Skewed Count Data
http://www.iaeme.com/IJMET/index.asp 1968 editor@iaeme.com
Z(\lambda, \nu) = \sum_{j=0}^{\infty} \frac{\lambda^{j}}{(j!)^{\nu}}
for \lambda > 0 and \nu \geq 0, where Z(\lambda, \nu) is the normalizing constant, which is observed not to have a closed analytical form. According to Shmueli et al. (2005), the approximation of Z(\lambda, \nu) is accurate when \nu \leq 1 or \lambda > 10^{\nu}. The moment generating function (mgf) of the COM-Poisson is given as:

M_Y(t) = E(e^{Yt}) = Z(\lambda e^{t}, \nu) / Z(\lambda, \nu)   (21)

while the probability generating function (pgf) is expressed as:

E(t^{Y}) = Z(\lambda t, \nu) / Z(\lambda, \nu)   (22)
The COM-Poisson distribution is a member of the exponential family in both parameters, with sufficient statistics S_1 = \sum_{i=1}^{n} Y_i and S_2 = \sum_{i=1}^{n} \log(Y_i!), where Y_1, \ldots, Y_n represents a random sample of n COM-Poisson random variables.
Parameter Estimation of the COM-Poisson Distribution via the Maximum Likelihood Approach
This estimation takes the maximum likelihood approach, which is made tractable by the distribution's exponential family structure. The log-likelihood function can be written as:
\log L(y_1, \ldots, y_n \mid \lambda, \nu) = \log\left[ \prod_{i=1}^{n} \frac{\lambda^{y_i}}{(y_i!)^{\nu}} \cdot Z(\lambda, \nu)^{-n} \right]   (23)
Further workings show that the left-hand side equals S_1 \log \lambda - \nu S_2 - n \log Z(\lambda, \nu), where S_1 = \sum_{i=1}^{n} Y_i and S_2 = \sum_{i=1}^{n} \log(Y_i!). Therefore,

L(y_1, \ldots, y_n \mid \lambda, \nu) = \lambda^{S_1} \cdot \exp(-\nu S_2) \cdot Z(\lambda, \nu)^{-n}   (24)
The COM-Poisson distribution can be expressed as:

L(y \mid \theta) = \gamma(\theta)\,\phi(y)\exp\left[\sum_{j=1}^{k} t_j(y)\,\pi_j(\theta)\right]   (25)

which qualifies the distribution to be listed among the exponential class of families.
In order to obtain the maximum likelihood estimates, a set of normal equations is solved iteratively. In evaluating the normalizing constant, the series \sum_{y} \lambda^{y}/(y!)^{\nu} converges for any \lambda > 0 and \nu > 0, since the ratio of two subsequent terms, \lambda/(y+1)^{\nu}, tends to 0 as y \to \infty. To make any computation of COM-Poisson probabilities, the normalizing constant Z(\lambda, \nu) has to be derived.
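Given the ratio argument above, Z(\lambda, \nu) can be computed by truncating the series once the terms become negligible. A short illustrative Python sketch (truncation length chosen arbitrarily) evaluates Z and the pmf of Eq. (20), and confirms that \nu = 1 recovers the ordinary Poisson distribution:

```python
import math

def Z(lam, nu, terms=200):
    """Truncated series for the normalizing constant Z(lambda, nu)."""
    total, term = 0.0, 1.0        # the j = 0 term equals 1
    for j in range(terms):
        total += term
        term *= lam / (j + 1) ** nu   # ratio of successive terms, tends to 0
    return total

def cmp_pmf(y, lam, nu):
    """COM-Poisson pmf, Eq. (20)."""
    return lam ** y / math.factorial(y) ** nu / Z(lam, nu)

lam = 2.5
pois = math.exp(-lam) * lam ** 3 / math.factorial(3)
print(cmp_pmf(3, lam, 1.0), pois)                    # equal: nu = 1 is Poisson
print(sum(cmp_pmf(y, 3.0, 1.5) for y in range(50)))  # probabilities sum to ~1
```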
The link function for the COM-Poisson model is given as:

\log(\mu_i) = x_i'\beta   (26)
\log(\nu_i) = -x_i'c

where \beta and c are the regression coefficients. The mean and variance are:

E(Y_i) = \exp(x_i'\beta)   (27)

and

V(Y_i) = \exp(x_i'\beta + x_i'c)   (28)

respectively.
3. APPLICATION
To assess the strength of the outlined models, the dataset "mdvis" was obtained from the "COUNT" package in R. The data comprise German Socio-Economic Panel data with two thousand two hundred and twenty-seven (2,227) observations. The response variable examined was the number of visits (numvisit), that is, the number of patients' visits to a doctor over a three-month period. Predictor variables include: reform (interview year post-reform 1998 = 1; pre-reform 1996 = 0), badh (not bad health = 0, bad health = 1), indicating the state of health, agegrp (20-39 = 1; 40-49 = 2; 50-60 = 3) and educ (education level: 1 = 7-10; 2 = 10.5-12; 3 = HSgrad+). The mean (2.589) and variance (16.129) indicate that the data is over-dispersed. The result of the model selection is presented in Table 1.
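The analysis itself was run in R; the sketch below is a simplified Python illustration of the same model-selection logic on simulated over-dispersed counts (intercept-only Poisson versus Negative Binomial, compared by AIC), not a re-analysis of the mdvis data:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import nbinom, poisson

# Simulated over-dispersed counts stand in for the mdvis data here;
# the parameter values are illustrative only.
rng = np.random.default_rng(42)
mu_true, m_true = 2.5, 1.0
y = nbinom.rvs(m_true, m_true / (m_true + mu_true), size=2000, random_state=rng)

# Intercept-only Poisson: the MLE of the rate is the sample mean
lam_hat = y.mean()
aic_pois = 2 * 1 - 2 * poisson.logpmf(y, lam_hat).sum()

# Intercept-only Negative Binomial: optimize the dispersion parameter m
def negll(m):
    return -nbinom.logpmf(y, m, m / (m + lam_hat)).sum()

m_hat = minimize_scalar(negll, bounds=(1e-3, 100.0), method="bounded").x
aic_nb = 2 * 2 + 2 * negll(m_hat)
print(aic_pois, aic_nb)  # the NB AIC is substantially lower
```

As in Table 1, the model that accommodates the extra dispersion attains the lower AIC.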
Table 1: Model selection for over-dispersed count data

Model          Reform      badh       educ       agegrp     AIC         BIC         Deviance
Poisson        -0.13789    1.13059    -0.0289    0.07453    11913.00    11941.87    7437.8
NB             -0.1354     1.13117    -0.0088    0.09016    9141.70     9175.98     0.0901**
ZIP            0.16595     -0.9994    -0.2302    0.03926    10816.30    -           -
ZINB           0.9037      -2.8589    -4.5675    -0.1442    9143.40     18813.22    -
Quasi-Poisson  -0.1378     1.13059    -0.0289    0.07453    74371.80    NA          -
CMP            -0.1378     1.13059    -0.0289    0.07453    9101.30*    9941.87     -
The plot showing the number of visits to the doctor is presented in Figure 1, while the Normal Q-Q plot is presented in Figure 2.
Figure 1: Histogram showing frequency of visits
Figure 2: Q-Q Plots for number of visits to doctor
The model coefficients/parameters and their corresponding confidence intervals are presented
in Table 2.
Table 2: Coefficients and confidence intervals from the model

Parameter            Coefficient   2.5 pct     97.5 pct
count_(Intercept)    2.0958295     1.8928566   0.9177257
count_reform         0.8711983     0.8269712   3.2834258
count_badh           3.0974853     2.9208505   0.9177257
count_educ           0.9715042     0.9387839   1.0053716
count_agegrp         1.0773779     1.0413875   1.1144003
4. DISCUSSION AND CONCLUSION
From Table 1, after fitting the selected models to the over-dispersed count data and comparing them, the results show that the COM-Poisson and Zero-Inflated models (in that order) are superior to the other models based on the Akaike and Bayesian Information Criteria (AIC and BIC). These results suggest that fitting with the Poisson, Negative Binomial and Quasi-Poisson models will likely lead to misleading inferences about the parameters.
Having established that a significant relationship exists between each predictor and the response variable, the coefficients were exponentiated, as shown in Table 2; the table gives the extent to which each predictor impacts the response variable. The coefficient for "reform" is less than 1; therefore, for every increase in the number of patients in the post-reform period, there is a decrease in the number of visits to the doctor by a factor of 0.8712. Also, as the number of people with bad health status increases, the number of visits to the doctor increases by a factor of 3.0975; bad health status must have informed the need to see their doctors.
Education has a coefficient less than 1; therefore, an increase in education level leads to a decrease in the number of visits to the doctor by a factor of 0.9715. This might be because education brings about exposure and a need to be conscious of one's health, which can in turn reduce visits to the doctor. Lastly, as age increases, the number of visits to the doctor increases by a factor of 1.0774; this relates to the fact that as an individual ages, there is a greater possibility of health issues. This is in line with the study of Christensen et al. (2009), who identified more health-related issues among older people than among younger individuals.
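The factors quoted above can be reproduced by exponentiating the CMP coefficients reported in Table 1; a small Python check (coefficients taken at their displayed precision):

```python
import math

# CMP coefficients from Table 1, at displayed precision
coefs = {"reform": -0.1378, "badh": 1.13059, "educ": -0.0289, "agegrp": 0.07453}
rate_ratios = {name: math.exp(b) for name, b in coefs.items()}
for name, rr in rate_ratios.items():
    print(name, round(rr, 4))
# agrees with Table 2 to about three decimal places:
# reform ~0.8712, badh ~3.0975, educ ~0.9715, agegrp ~1.0774
```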
In this study, the suitability of the COM-Poisson model over some other existing parametric models has been established for analyzing count datasets with high dispersion. A real-life dataset was used to check the performance of the outlined models. The COM-Poisson model has proven suitable for modeling highly skewed, over-dispersed count data and is therefore recommended. Future studies can consider fitting the models to under-dispersed datasets using both frequentist and Bayesian techniques. Estimation of parameters can also be done following the work of Rastogi and Oguntunde (2018).
ACKNOWLEDGEMENT
The authors would like to thank Covenant University for her support.
REFERENCES
[1] Adesina O. S., Olatayo T. O., Agboola O. O., Oguntunde P. E. (2018). Bayesian Dirichlet
Process Mixture Prior for Count Data, International Journal of Mechanical Engineering and
Technology, 9(12), 630-646
[2] Adesina O. S, Agunbiade D. A., Osundina S. A. (2017). Bayesian Regression Model for
Counts in Scholarship, Journal of Mathematical Theory and Modelling. 7(9), 46-57
[3] Cameron A. C., Trivedi P. K. (2005). Microeconometrics: Methods and Applications,
Cambridge University Press
[4] Cameron A. C., Trivedi P. K., Milne F., Piggott J. (1988). A Microeconometric Model of the
Demand for Health Care and Health Insurance in Australia, Review of Economic Studies, 55,
85–106
[5] Christensen K., Doblhammer G., Rau R., Vaupel J. W. (2009). Ageing populations: The
challenges ahead, Lancet, 374(9696), 1196-1208
[6] Famoye F. (1993). Restricted generalized Poisson regression model, Communications in
Statistics-Theory and Methods, 22(5), 1335–1354
[7] Gagnon D. R., Doron-LaMarca S., Bell M., O'Farrell T. J., Taft C.T. (2008). Poisson
regression for modeling count and frequency outcomes in trauma research, Journal of
Traumatic Stress, 21(5), 448-454
[8] Grootendorst P. (1995). Effects of Drug Plan Eligibility on Prescription Drug Utilization, Ph.
D. Dissertation, McMaster University.
[9] Heller G. Z., Mikis S. D., Rigby R. A., De Jong P. (2007). Mean and dispersion modelling
for policy claims costs, Scandinavian Actuarial Journal, 4, 281-292
[10] Hilbe J. M. (2007). Negative Binomial Regression. Cambridge: Cambridge University Press.
[11] Hilbe J. M. (2016). COUNT: Functions, Data and Code for Count Data. R package
version 1.3.4. https://CRAN.R-project.org/package=COUNT
[12] Kalyanam K., Borle S., Boatwright P. (2007). Deconstructing each item's category
contribution, Marketing Science, 26(3), 327-341. doi: 10.1287/mksc.1070.0270. URL
http://pubsonline.informs.org/doi/abs/10.1287/mksc.1070.0270
[13] Nelder J. A., Wedderburn R. W. M. (1972). Generalized Linear Models, Journal of the Royal
Statistical Society. Series A (General), 135(3), 370–384
[14] Minka T. P., Shmueli G., Kadane J. B., Borle S., Boatwright P. (2003). Computing with the
COM-Poisson distribution. Technical report, CMU Statistics Department.
[15] Gupta R., Marino B. S., Cnota J. F., Ittenbach R. F. (2013) Finding the right distribution for
highly skewed zero-inflated clinical data, Epidemiology Biostatistics and Public Health,
10(1). https://doi.org/10.2427/8732
[16] R Core Team (2018). R: A language and environment for statistical computing. R Foundation
for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/
[17] Rastogi M. K., Oguntunde P. E. (2018). Classical and Bayes estimation of reliability
characteristics of the Kumaraswamy-Inverse Exponential distribution, International Journal
of System Assurance Engineering and Management (Online First),
https://doi.org/10.1007/s13198-018-0744-7
[18] Ridout M. S., Besbeas P. (2004). An empirical model for underdispersed count data,
Statistical Modelling, 4(1), 77-89, ISSN 1471-082X
[19] Romundstad P., Andersen A., Haldorsen T. (2001). Cancer incidence among workers in the
Norwegian silicon carbide industry. American Journal of Epidemiology, 153(10), 978-986
[20] Sellers K. F., Shmueli G. (2010). A Flexible Regression Model for Count Data, The
Annals of Applied Statistics, 4(2), 943-961
[21] Shmueli G., Minka T. P., Kadane J. B., Borle S., Boatwright P. (2005). A useful distribution
for fitting discrete data: revival of the Conway-Maxwell-Poisson distribution, Journal of the
Royal Statistical Society: Series C, 54(1), 127-142
[22] Winkelmann R. (2004). Health care reform and the number of doctor visits: An econometric
analysis, Journal of Applied Econometrics, 19(4), 455-472
[23] Yau K. K. W., Wang K., Lee A. H. (2003). Zero-inflated negative binomial mixed regression
modeling of over-dispersed count data with extra zeros, Biometrical Journal, 45(4), 437-452
[24] Yip P. (1988). Inference about the mean of a Poisson distribution in the presence of a nuisance
parameter, Australian Journal of Statistics, 30, 299-306