- The document proposes a novel composition forecasting model based on the Choquet integral with respect to a completed extensional L-measure and M-density fuzzy measure.
- It compares the performance of this model to other composition forecasting models, including ones based on extensional L-measure, L-measure, lambda-measure, and P-measure, as well as ridge regression and multiple linear regression models.
- Experimental results on grain production time series data show that the proposed Choquet integral composition forecasting model with completed extensional L-measure and M-density outperforms the other models.
The document discusses the Fundamental Theorem of Calculus, which has two parts. Part 1 establishes the relationship between differentiation and integration, showing that the derivative of an antiderivative is the integrand. Part 2 allows evaluation of a definite integral by evaluating the antiderivative at the bounds. Examples are given of using both parts to evaluate definite integrals. The theorem unified differentiation and integration and was fundamental to the development of calculus.
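As a concrete illustration of Part 2 (a standard textbook-style example, not taken from the document itself), evaluating a definite integral via an antiderivative:

```latex
% Part 1: if F(x) = \int_a^x f(t)\,dt, then F'(x) = f(x).
% Part 2: \int_a^b f(x)\,dx = F(b) - F(a) for any antiderivative F of f.
% Worked example with f(x) = x^2 and F(x) = x^3/3:
\int_1^3 x^2\,dx = \left[\tfrac{x^3}{3}\right]_1^3 = \tfrac{27}{3} - \tfrac{1}{3} = \tfrac{26}{3}.
```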
This document is about the fusion of two nature-inspired optimization algorithms: the Gravitational Search Algorithm (GSA), based on Newton's universal law of gravitation, and Biogeography-Based Optimization (BBO), based on biogeography (the study of the distribution of species across habitats).
This summarizes a document about a filter-and-refine approach for reducing computational cost when performing correlation analysis on pairs of spatial time series datasets. It groups similar time series within each dataset into "cones" based on spatial autocorrelation. Cone-level correlation computation can then filter out many element pairs whose correlation is clearly below a threshold. The remaining pairs require individual correlation computation in the refinement phase. Experiments on Earth science datasets showed significant computational savings, especially with high correlation thresholds.
Statistical Analysis and Model Validation of Gompertz Model on different Real... (Editor Jacotech)
This document summarizes statistical analysis and model validation of the Gompertz model on different real data sets for reliability modeling. It presents maximum likelihood estimation of the Gompertz parameters using the Newton-Raphson method. Goodness-of-fit checks, including the Kolmogorov-Smirnov test and quantile-quantile plots, are used to validate the Gompertz model on six real data sets and to determine which data sets provide the best fit for parameter estimation.
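As a rough sketch of the kind of maximum-likelihood fit described above, assuming the common two-parameter Gompertz density f(t) = a e^{bt} exp(-(a/b)(e^{bt} - 1)); the data below are hypothetical and SciPy's generic optimizer stands in for the paper's hand-derived Newton-Raphson iterations:

```python
import numpy as np
from scipy.optimize import minimize

def gompertz_neg_loglik(params, t):
    """Negative log-likelihood for the two-parameter Gompertz distribution
    with density f(t) = a * exp(b t) * exp(-(a/b) * (exp(b t) - 1))."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    return -np.sum(np.log(a) + b * t - (a / b) * (np.exp(b * t) - 1.0))

# Hypothetical failure-time data; the paper's six real data sets are not reproduced here.
t = np.array([0.5, 1.2, 2.3, 3.1, 4.0, 4.8, 5.5, 6.9, 7.4, 9.0])

result = minimize(gompertz_neg_loglik, x0=[0.1, 0.1], args=(t,), method="Nelder-Mead")
a_hat, b_hat = result.x
print("MLE estimates: a =", a_hat, "b =", b_hat)
```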
This document discusses prior selection for mixture estimation. It begins by introducing mixture models and their common parameterization. It then discusses several types of weakly informative priors that can be used for mixture models, including empirical Bayes priors, hierarchical priors, and reparameterizations. It notes challenges with using improper priors for mixture models. The document also discusses saturated priors when the number of components is not known beforehand. It covers Jeffreys priors for mixtures and issues around propriety. It proposes some reparameterizations of mixtures, like using moments or a spherical reparameterization, that allow proper Jeffreys-like priors to be defined.
The document presents four numerical methods for finding the nth root of a positive number: the Fixed-b method, Adaptive-b method, Simplified Adaptive-b method, and Newton-Raphson Improved (NRI) method. It analyzes the convergence properties and rate of convergence for each method. The Fixed-b and Adaptive-b methods introduce a parameter b that can improve convergence, and the Adaptive-b method allows b to adapt at each iteration. The NRI method is derived from the Simplified Adaptive-b method and shows faster convergence than Newton-Raphson in some cases.
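For reference, the baseline Newton-Raphson iteration for the nth root solves f(x) = x^n - A = 0 with x_{k+1} = x_k - (x_k^n - A)/(n x_k^{n-1}); a minimal Python sketch (the document's Fixed-b, Adaptive-b, and NRI variants are not reproduced here):

```python
def nth_root_newton(A, n, x0=None, tol=1e-12, max_iter=100):
    """Approximate A**(1/n) for A > 0 by Newton-Raphson on f(x) = x**n - A."""
    x = x0 if x0 is not None else max(A, 1.0)  # any positive starting guess
    for _ in range(max_iter):
        x_new = x - (x**n - A) / (n * x**(n - 1))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(nth_root_newton(2.0, 3))   # cube root of 2, approx. 1.2599
print(nth_root_newton(81.0, 4))  # fourth root of 81, exactly 3
```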
Fuzzy inventory model with shortages in manpower planning (Alexander Decker)
This document presents a fuzzy inventory model to determine the optimal time for an employee to change jobs while minimizing costs. It introduces concepts like real wage, membership functions, and fuzzy nonlinear programming. The model considers costs of decreasing real income, moving to a new job, and income shortages. It uses Lagrange multipliers to solve the fuzzy nonlinear programming problem and compares the results to a crisp model. A numerical example is provided to illustrate the application to manpower planning.
The document describes a three-parameter generalized inverse Weibull (GIW) distribution that can model failure rates. Key properties of the GIW distribution include:
- It reduces to the inverse Weibull distribution when the shape parameter γ equals 1.
- Its probability density function, survival function, and hazard function are defined (one common parameterization is sketched after this list).
- Formulas are provided for the moments, moment generating function, and Shannon entropy of the GIW distribution.
- Methods are described for maximum likelihood estimation of the GIW distribution parameters from censored lifetime data.
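For orientation, one common three-parameter form of the generalized inverse Weibull distribution found in the literature is shown below; the document's exact parameterization and notation may differ:

```latex
% One common generalized inverse Weibull parameterization (an assumption, not necessarily the document's):
% shape parameters \gamma, \beta > 0 and scale parameter \alpha > 0.
F(x) = \exp\!\left[-\gamma\left(\frac{\alpha}{x}\right)^{\beta}\right], \qquad
f(x) = \gamma\,\beta\,\alpha^{\beta}\,x^{-(\beta+1)}\exp\!\left[-\gamma\left(\frac{\alpha}{x}\right)^{\beta}\right],
\qquad x > 0,
% which reduces to the inverse Weibull distribution when \gamma = 1.
```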
This document discusses Bayesian model comparison in cosmology using population Monte Carlo methods. It provides background on key questions in cosmology that can be addressed using cosmic microwave background data from experiments like WMAP and Planck. Population Monte Carlo and adaptive importance sampling methods are introduced to help approximate Bayesian evidence for different cosmological models given the immense computational challenges of working with this cosmological data.
In this paper, the Mine Blast Algorithm (MBA) is hybridized with the Harmony Search (HS) algorithm to solve the optimal reactive power dispatch problem. MBA is inspired by the explosion of landmines, while HS imitates the creative process of musicians tuning their instruments' pitch in search of a best state of harmony. In MBA, the initial distance of the shrapnel pieces is reduced gradually so that the mine bombs can search for the probable global minimum location, amplifying the global exploration capability. Hybridizing MBA with HS (MH) improves the search over the solution space: MBA strengthens exploration and HS augments exploitation. The proposed algorithm starts with exploration and gradually shifts to exploitation. The hybridized MH algorithm has been tested on the standard IEEE 14-bus and 300-bus test systems, where it reduces real power loss considerably. It was also tested on the IEEE 30-bus system (considering a voltage stability index), achieving real power loss minimization, voltage deviation minimization, and voltage stability index enhancement.
This paper introduces a new strategy called iterative parameter mixing for distributed training of structured perceptrons. It proves that this strategy converges and separates data if separable. Experiments on named entity recognition and dependency parsing show it trains faster and more accurately than serial training, and achieves results close to averaged perceptrons. While it benefits from parallelization, more shards can slow convergence due to increased synchronization costs.
This document provides solutions to logistics management assignment problems. It addresses 6 problems related to network flows, facility location, and vehicle routing. For problem 1, it shows that the difference between the sum of outdegrees and indegrees in a network is always zero. For problem 2, it proves Goldman's majority theorem and provides an algorithm to solve the 1-median problem. The solutions utilize concepts like isthmus edges, node weights, and network separation.
1) Likelihood-free Bayesian experimental design is discussed as an intractable likelihood optimization problem, where the goal is to find the optimal design d that minimizes expected loss without using the full posterior distribution.
2) Several Bayesian tools are proposed to make the design problem more Bayesian, including Bayesian non-parametrics, annealing algorithms, and placing a posterior on the design d.
3) Gaussian processes are a default modeling choice for complex unknown functions in these problems, but their accuracy is difficult to assess and they may suffer from the curse of dimensionality.
Comments on exponential ergodicity of the bouncy particle sampler (Christian Robert)
The document summarizes recent work on establishing theoretical convergence rates for the bouncy particle sampler (BPS), a non-reversible Markov chain Monte Carlo algorithm. The main results show that under certain conditions on the target distribution, including having exponentially decaying tails, the BPS exhibits exponential ergodicity. A central limit theorem is also established. The analysis considers different cases for thin-tailed, thick-tailed, and transformed target distributions.
Causal set theory is an approach to quantum gravity that represents spacetime as a locally finite partially ordered set of points with causal relations. It is a minimalist approach that does not assume an underlying spacetime continuum. There are two main methods to reconstruct a manifold from a causal set: 1) extracting manifold properties like dimension from causal sets that can be embedded in a manifold, and 2) sprinkling points randomly into an existing manifold to produce an embedded causal set. To study dynamics, an action must be defined on causal sets that reproduces the Einstein-Hilbert action in the continuum limit. Several proposals have been made to define nonlocal operators on causal sets that approach the d'Alembertian operator in the limit. Overall causal set
In this work, we propose to apply trust region optimization to deep reinforcement learning using a recently proposed Kronecker-factored approximation to the curvature. We extend the framework of natural policy gradient and propose to optimize both the actor and the critic using Kronecker-factored approximate curvature (K-FAC) with trust region; hence we call our method Actor Critic using Kronecker-Factored Trust Region (ACKTR). To the best of our knowledge, this is the first scalable trust region natural gradient method for actor-critic methods. It is also a method that learns non-trivial tasks in continuous control as well as discrete control policies directly from raw pixel inputs. We tested our approach across discrete domains in Atari games as well as continuous domains in the MuJoCo environment. With the proposed methods, we are able to achieve higher rewards and a 2- to 3-fold improvement in sample efficiency on average, compared to previous state-of-the-art on-policy actor-critic methods. Code is available at https://github.com/openai/baselines.
Heated Wind Particle's Behavioural Study by the Continuous Wavelet Transform ... (cscpconf)
The Continuous Wavelet Transform (CWT) and fractal analysis are now widely used in signal and image processing. This work extends their field of application to the behavioural study of heated (agitated) wind particles. We show mathematically that the wind particle's movement exhibits uncorrelated characteristics during its convectional flow, and we demonstrate this with the Continuous Wavelet Transform and fractal analysis using MATLAB 7.12.
Application of Graphic LASSO in Portfolio Optimization_Yixuan Chen & Mengxi J... (Mengxi Jiang)
- The document describes using graphical lasso to estimate the precision matrix of stock returns and apply portfolio optimization.
- Graphical lasso estimates the precision matrix instead of the covariance matrix to allow for sparsity. This makes the estimation more efficient for large datasets.
- The study uses 8 different models to simulate stock return data and compares the performance of graphical lasso, sample covariance, and shrinkage estimators for portfolio optimization on in-sample and out-of-sample test data. Graphical lasso performed best on out-of-sample test data, showing it can generate portfolios that generalize well (a minimal scikit-learn sketch follows).
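A minimal sketch of the precision-matrix step with scikit-learn's GraphicalLasso followed by global minimum-variance weights; the simulated returns, regularization strength, and portfolio rule below are illustrative assumptions, not the study's actual setup:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Hypothetical daily returns: 250 observations of 8 stocks (a stand-in for the study's data).
rng = np.random.default_rng(0)
returns = rng.multivariate_normal(np.zeros(8), 0.0004 * np.eye(8), size=250)

# Estimate a sparse precision matrix (inverse covariance) via graphical lasso.
# The regularization strength alpha is an arbitrary illustrative choice.
model = GraphicalLasso(alpha=1e-5).fit(returns)
precision = model.precision_

# Global minimum-variance weights: w proportional to precision @ 1, normalized to sum to 1.
ones = np.ones(precision.shape[0])
weights = precision @ ones
weights /= weights.sum()
print("portfolio weights:", np.round(weights, 3))
```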
This document presents a new generalized Lindley distribution (NGLD). The NGLD contains the gamma, exponential, and Lindley distributions as special cases. Statistical properties of the NGLD like the hazard function, moments, and moment generating function are derived. Maximum likelihood estimation is discussed to estimate the parameters of the NGLD. Two real data sets are analyzed to illustrate the usefulness of the new distribution.
Generic Reinforcement Schemes and Their Optimization (infopapers)
Dana Simian, Florin Stoica, Generic Reinforcement Schemes and Their Optimization, Proceedings of the 5th European Computing Conference (ECC ’11), Paris, France, April 28-30, 2011, pp. 332-337
This document discusses various methods for approximating marginal likelihoods and Bayes factors, including:
1. Geyer's 1994 logistic regression approach for approximating marginal likelihoods using importance sampling.
2. Bridge sampling and its connection to Geyer's approach. Optimal bridge sampling requires knowledge of unknown normalizing constants.
3. Using mixtures of importance distributions and the target distribution as proposals to estimate marginal likelihoods through Rao-Blackwellization. This connects to bridge sampling estimates.
4. The historical development of these approximation techniques and the connections between them when comparing hypotheses via Bayes factors.
Parallel hybrid chicken swarm optimization for solving the quadratic assignme... (IJECEIAES)
This research proposes a new method based on parallel hybrid chicken swarm optimization (PHCSO), integrating the constructive procedure of GRASP with an effective modified version of Tabu search. The goal of this adaptation is to prevent stagnation of the search. The proposed contribution also seeks an optimal trade-off between the two key components of bio-inspired metaheuristics, local intensification and global diversification, which affect the efficiency of the algorithm and the choice of its parameters. Exhaustive experiments on diverse QAPLIB instances gave promising results. Finally, perspectives for further research are briefly highlighted.
This document discusses various importance sampling methods for approximating Bayes factors, which are used for Bayesian model selection. It compares regular importance sampling, bridge sampling, harmonic means, mixtures to bridge sampling, and Chib's solution. An example application to probit modeling of diabetes in Pima Indian women is presented to illustrate regular importance sampling. Markov chain Monte Carlo methods like the Metropolis-Hastings algorithm and Gibbs sampling can be used to sample from the probit models.
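As a generic illustration of the regular importance sampling estimator mentioned above, the marginal likelihood m(x) = ∫ p(x | θ) p(θ) dθ can be approximated by averaging importance weights; the conjugate Gaussian toy model below is an assumption chosen so the estimate can be checked, not the probit/Pima example from the document:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy model: x_i ~ N(theta, 1) with prior theta ~ N(0, 1); the marginal likelihood
# is tractable here, so the importance sampling estimate can be checked exactly.
x = rng.normal(0.5, 1.0, size=20)

def log_lik(theta):
    return stats.norm.logpdf(x[:, None], loc=theta, scale=1.0).sum(axis=0)

# Importance distribution q: a Gaussian roughly centered on the posterior.
q_mean, q_sd = x.mean(), 1.0 / np.sqrt(len(x))
theta = rng.normal(q_mean, q_sd, size=50_000)

# log weights: log p(x|theta) + log p(theta) - log q(theta)
log_w = log_lik(theta) + stats.norm.logpdf(theta, 0.0, 1.0) - stats.norm.logpdf(theta, q_mean, q_sd)
log_m_hat = np.logaddexp.reduce(log_w) - np.log(len(theta))  # log of the average weight

# Exact log marginal likelihood for this conjugate toy model: x ~ N(0, I + 11^T).
n = len(x)
exact = stats.multivariate_normal.logpdf(x, mean=np.zeros(n), cov=np.eye(n) + np.ones((n, n)))
print("IS estimate:", log_m_hat, "exact:", exact)
```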
This document presents a novel copula-based approach to generate critical sea states given a target reliability index based on the return period of the extreme event. The copula-based approach is much more flexible and powerful than conventional approaches that rely on the linear correlation coefficient.
The document summarizes Approximate Bayesian Computation (ABC). It discusses how ABC provides a way to approximate Bayesian inference when the likelihood function is intractable or too computationally expensive to evaluate directly. ABC works by simulating data under different parameter values and accepting simulations that are close to the observed data according to a distance measure and tolerance level. Key points discussed include:
- ABC provides an approximation to the posterior distribution by sampling from simulations that fall within a tolerance of the observed data.
- Summary statistics are often used to reduce the dimension of the data and improve the signal-to-noise ratio when applying the tolerance criterion.
- Random forests can help select informative summary statistics and provide semi-automated ABC; a minimal rejection-ABC sketch is given below.
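A minimal rejection-ABC sketch on an assumed toy model (inferring a Gaussian mean with the sample mean as summary statistic); the prior, distance, and tolerance are illustrative choices only:

```python
import numpy as np

rng = np.random.default_rng(2)

# "Observed" data from an unknown mean mu (here secretly 1.5) and its summary statistic.
observed = rng.normal(1.5, 1.0, size=100)
s_obs = observed.mean()

def simulate(mu, size=100):
    """Simulator: draws a data set for a given parameter and returns its summary statistic."""
    return rng.normal(mu, 1.0, size=size).mean()

# Rejection ABC: sample from the prior, keep parameters whose simulated summary
# falls within the tolerance of the observed summary.
prior_draws = rng.uniform(-5.0, 5.0, size=100_000)
tolerance = 0.05
accepted = np.array([mu for mu in prior_draws if abs(simulate(mu) - s_obs) < tolerance])

print(f"accepted {len(accepted)} draws; posterior mean approx. {accepted.mean():.3f}")
```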
Bayesian Segmentation in Signal with Multiplicative Noise Using Reversible Ju... (TELKOMNIKA JOURNAL)
This paper addresses an important problem in signal segmentation: the signal is disturbed by multiplicative noise and the number of segments is unknown. A Bayesian approach is proposed to estimate the parameters, which include the number of segments, the segment locations, and the amplitudes. The posterior distribution of the parameters does not have a simple form, so the Bayes estimator is not easily determined. The reversible jump Markov chain Monte Carlo (MCMC) method is adopted to overcome this problem; it creates a Markov chain whose distribution is close to the posterior distribution. The performance of the algorithm is shown on simulated data, where it works well. As an application, the algorithm is used to segment a Synthetic Aperture Radar (SAR) signal. The advantage of this method is that the number of segments, the positions of the segment changes, and the amplitudes are estimated simultaneously.
Germ cell tumors are a varied group of benign and malignant neoplasms that can occur in gonadal and extragonadal sites. Teratoma is the most common fetal and neonatal tumor, usually appearing as mature or immature teratomas in sacrococcygeal, cervical, and retroperitoneal areas. Yolk sac tumor is the second most frequent germ cell tumor in infants and most often arises in the testis. Cytogenetic analysis has shown nonrandom chromosomal abnormalities involving chromosomes 1 and 12 in germ cell tumors.
Toronto Presentation - A Least Square Ratio (LSR) Approach to Fuzzy Linear Re... (Murat YAZICI, M.Sc.)
The document discusses fuzzy linear regression analysis and a new method called the Least Squares Ratio (LSR) approach. It presents the LSR method as an alternative to overcome limitations of traditional ordinary least squares regression. A case study applies the LSR approach and ordinary least squares to analyze the effects of video display terminal position on an operator. The LSR approach provided better results and would perform better in cases with outliers.
London Presentation - A Least Square Ratio (LSR) Approach to Fuzzy Linear Reg... (Murat YAZICI, M.Sc.)
The document discusses weighted least squares ratio (WLSR) methods for M-estimators as an alternative to ordinary least squares (OLS). It proposes using WLSR to fit initial regression models and calculate residuals, then applying weighting functions in an iteratively reweighted least squares ratio approach. A simulation study compares WLS and WLSR methods under different conditions, finding that WLSR often outperforms WLS except with one weighting function. The document concludes that WLSR may provide better results than WLS for M-estimation when outliers are present.
This document discusses fuzzy regression models. It begins by introducing fuzzy regression and its motivation for addressing situations where classical regression is problematic, such as small data sets or vagueness in relationships. It then defines the components of fuzzy regression models, including fuzzy coefficients represented by triangular membership functions. Two approaches to fuzzy regression are explored: Tanaka's possibilistic regression which minimizes coefficient fuzziness, and fuzzy least squares regression. The document uses a sample data set to illustrate key concepts throughout.
The document discusses fuzzy logic, covering its definition, history, degrees of truth, linguistic variables, advantages and disadvantages, and example applications of fuzzy logic such as washing machines.
The document discusses the theory of demand and concepts related to demand analysis such as the law of demand, utility theory, indifference curves, and consumer equilibrium. It explains that demand analysis is important for business managers to understand factors that influence demand, how price changes affect quantity demanded, and consumer behavior. Key concepts covered include total utility, marginal utility, diminishing marginal utility, cardinal and ordinal utility, substitution and income effects, and different types of demand functions.
This document discusses time series analysis techniques in R, including decomposition, forecasting, clustering, and classification. It provides examples of decomposing the AirPassengers dataset, forecasting with ARIMA models, hierarchical clustering on synthetic control chart data using Euclidean and DTW distances, and classifying the control chart data using decision trees with DWT features. Accuracy of over 88% was achieved on the classification task.
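The document's examples are written in R; purely for illustration, a rough Python analogue of the decomposition and ARIMA-forecasting steps using statsmodels is sketched below (it fetches AirPassengers over the network from the Rdatasets mirror, and the column indexing and ARIMA order are assumptions):

```python
import pandas as pd
from statsmodels.datasets import get_rdataset
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.arima.model import ARIMA

# Monthly airline passenger counts, 1949-1960 (requires internet access).
air = get_rdataset("AirPassengers").data
values = air.iloc[:, -1].values  # assumes the last column holds the passenger counts
series = pd.Series(values, index=pd.date_range("1949-01", periods=len(values), freq="MS"))

# Classical decomposition into trend, seasonal, and residual components.
decomposition = seasonal_decompose(series, model="multiplicative")
print(decomposition.trend.dropna().head())

# Fit an ARIMA model (illustrative order) and forecast the next 12 months.
model = ARIMA(series, order=(2, 1, 2)).fit()
print(model.forecast(steps=12))
```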
The document discusses the economic concept of demand. It defines demand as the quantity of a product that consumers are willing and able to purchase at various price levels. Demand is determined by factors such as price, income, tastes, and population. The law of demand states that, all else equal, demand decreases as price increases. However, there are some exceptions such as Giffen goods where demand increases with price. The document also discusses individual demand, market demand, demand curves, determinants of demand, and extensions/contractions in demand.
A Monte Carlo strategy for structured multiple-step-ahead time series prediction (Gianluca Bontempi)
The document proposes a Monte Carlo approach called SMC (Structured Monte Carlo) for multiple-step-ahead time series forecasting that takes into account the structural dependencies between predictions. It generates samples using a direct forecasting approach and weights them based on how well they satisfy dependencies identified by an iterated approach. Experiments on three benchmark datasets show the SMC approach achieves more accurate forecasts as measured by SMAPE than iterated, direct, or other comparison methods for most prediction horizons tested.
Financial Time Series Analysis Based On Normalized Mutual Information Functions (IJCI JOURNAL)
A method of predictability analysis for future values of financial time series is described, based on normalized mutual information functions. Using these functions makes it possible to avoid imposing any restrictions on the distributions of the parameters or on the correlations between them. A comparative analysis of the predictability of financial time series from the Tel Aviv 25 stock exchange has been carried out.
Residuals and Influence in Nonlinear Regression for Repeated Measurement Data (orajjournal)
Not all observations carry equal weight in regression analysis, and diagnostics of observations is an important aspect of model building. In this paper, diagnostic methods are used to detect residuals and influential points in nonlinear regression for repeated measurement data. Cook's distance and the Gauss-Newton method are proposed to identify outliers in nonlinear regression analysis and for parameter estimation. Most of these techniques are based on graphical representations of residuals, the hat matrix, and case-deletion measures. The results show the detection of single and multiple outlier cases in repeated measurement data, and these techniques are used to explore the performance of residual and influence diagnostics in the nonlinear regression model.
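For the linear case, Cook's distance is available directly from statsmodels influence diagnostics; a minimal sketch on synthetic data with one planted outlier (the paper's nonlinear, repeated-measurement setting is more involved and not reproduced here):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Synthetic data with one planted outlier to illustrate the diagnostic.
x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, size=50)
y[10] += 5.0  # contaminate a single observation

X = sm.add_constant(x)
results = sm.OLS(y, X).fit()

# Cook's distance flags observations with outsized influence on the fitted coefficients.
influence = results.get_influence()
cooks_d, _ = influence.cooks_distance
print("most influential observation:", int(np.argmax(cooks_d)), "with D =", cooks_d.max())
```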
Bayes estimators for the shape parameter of Pareto Type I (Alexander Decker)
This document discusses Bayesian estimators for the shape parameter of the Pareto Type I distribution under different loss functions. It begins by introducing the Pareto distribution and some classical estimators for the shape parameter, including the maximum likelihood estimator, uniformly minimum variance unbiased estimator, and minimum mean squared error estimator. It then derives the Bayesian estimators under a generalized square error loss function and quadratic loss function. Both informative priors (Exponential distribution) and non-informative priors (Jeffreys prior) are considered. The performance of the estimators is compared using Monte Carlo simulations and mean squared errors.
This document discusses using bootstrap methods to create confidence intervals for time series forecasts. It provides examples of time series data and introduces the AR(1) model. The document describes an algorithm for calculating a bootstrap confidence interval for forecasting from an AR(1) model. It then discusses a simulation study comparing empirical coverage rates of bootstrap confidence intervals under different parameters. Finally, it applies the bootstrap method to forecasting Gross National Product growth, comparing the results to a parametric approach.
This document discusses using bootstrap methods to create confidence intervals for time series forecasts. It provides background on time series models and the autoregressive (AR) process. It then presents an algorithm for calculating a bootstrap confidence interval for forecasts from an AR(1) model. A simulation study compares coverage rates for bootstrap confidence intervals under different parameters. Finally, the method is applied to US Gross National Product data to forecast and construct confidence intervals.
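A minimal sketch of the residual-bootstrap idea for a one-step-ahead AR(1) forecast interval; the simulated series, refitting scheme, and horizon are illustrative assumptions rather than either document's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate an AR(1) series y_t = phi * y_{t-1} + e_t as a stand-in for real data.
phi_true, n = 0.7, 200
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal()

# Fit AR(1) by least squares and collect centered residuals.
phi_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
resid = y[1:] - phi_hat * y[:-1]
resid -= resid.mean()

# Bootstrap: rebuild series from resampled residuals, refit, forecast from the real series.
B, forecasts = 2000, []
for _ in range(B):
    boot_resid = rng.choice(resid, size=n - 1, replace=True)
    y_b = np.zeros(n)
    y_b[0] = y[0]
    for t in range(1, n):
        y_b[t] = phi_hat * y_b[t - 1] + boot_resid[t - 1]
    phi_b = np.sum(y_b[1:] * y_b[:-1]) / np.sum(y_b[:-1] ** 2)
    forecasts.append(phi_b * y[-1] + rng.choice(resid))  # include innovation uncertainty

lower, upper = np.percentile(forecasts, [2.5, 97.5])
print(f"point forecast {phi_hat * y[-1]:.3f}, 95% bootstrap interval ({lower:.3f}, {upper:.3f})")
```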
Cointegration and Long-Horizon Forecasting (محمد إسماعيل)
This document summarizes research on comparing the accuracy of long-horizon forecasts from multivariate cointegrated systems versus univariate models that ignore cointegration. The main findings are:
1) When accuracy is measured using standard trace mean squared error, imposing cointegration provides no benefit over univariate models at long horizons.
2) Both multivariate and univariate long-horizon forecasts satisfy the cointegrating relationships exactly.
3) The cointegrating combinations of forecast errors from both approaches have finite variance at long horizons.
This document presents a new Bayesian model for the frequency-magnitude distribution of earthquakes. The model derives a probability distribution function for earthquake magnitudes based on marginalizing over the parameter β, which relates to the Gutenberg-Richter b parameter. The model provides an excellent fit to earthquake magnitude data from Chile, both before and after several major quakes. The model belongs to the generalized type-2 beta distribution family and can be viewed as a form of generalized superstatistics, connecting it to previous works on non-extensive statistics and complex systems.
The Sample Average Approximation Method for Stochastic Programs with Integer ... (SSA KPI)
The document describes a sample average approximation method for solving stochastic programs with integer recourse. It approximates the expected recourse cost function using a sample average based on a sample of scenarios. It shows that as the sample size increases, the solution to the sample average approximation problem converges exponentially fast to the optimal solution of the true stochastic program. It also describes statistical and deterministic techniques for validating candidate solutions. Preliminary computational results applying this method are also mentioned.
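A toy sample average approximation for a newsvendor-style problem, chosen only to illustrate replacing the expectation by a sample average over sampled scenarios; the paper's integer-recourse formulation and validation techniques are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(5)

# Newsvendor: order q units at cost c, sell at price p; demand D is random.
c, p = 4.0, 10.0
demand_sample = rng.poisson(50, size=5_000)  # sample of scenarios defining the SAA problem

def saa_profit(q):
    """Sample average of the random profit p*min(q, D) - c*q over the drawn scenarios."""
    return np.mean(p * np.minimum(q, demand_sample) - c * q)

# The SAA problem: maximize the sample-average profit over integer order quantities.
candidates = np.arange(0, 101)
values = np.array([saa_profit(q) for q in candidates])
q_star = candidates[np.argmax(values)]
print("SAA-optimal order quantity:", q_star, "estimated profit:", round(values.max(), 2))
```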
Analyzing high-frequency time series is increasingly useful with the current explosion in the availability of these data in several application areas, including but not limited to, climate, finance, health analytics, transportation, etc. This talk will give an overview of two statistical frameworks that could be useful for analyzing high-frequency financial time series leading to quantification of financial risk. These include a distribution free approach using penalized estimating functions for modeling inter-event durations and an approximate Bayesian approach for modeling counts of events in regular intervals. A few other potentially useful lines of research in this area will also be introduced.
The document describes a recursive algorithm for multi-step prediction with mixture models that have dynamic switching between components. It begins by introducing notations and reviewing individual models, including normal regression components and static/dynamic switching models. It then presents the mixture prediction algorithm, first for a static switching model by constructing a predictive distribution from weighted component predictions. For a dynamic switching model, it similarly takes point estimates from the previous time and substitutes them into components to make weighted averaged predictions over multiple steps. The algorithm is summarized as initializing component statistics and parameter estimates, then substituting previous estimates into components to obtain weighted mixture predictions for new data points.
This document summarizes preconditioning methods for large-scale variational data assimilation. It discusses:
1) 4D-Var, which uses observations to constrain model parameters and iteratively minimize a cost function to obtain an optimal estimate of the true state.
2) The GEOS-Chem forward model, which calculates chemical concentrations from initial to final time.
3) The adjoint model, which calculates the gradient of the cost function by backwards integration of the tangent linear model operators.
4) The L-BFGS optimization routine, which iteratively finds the minimum of the cost function using a limited-memory approximation of the Hessian to update the model parameter estimates (a generic SciPy sketch follows this list).
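A generic sketch of that optimization step with SciPy's limited-memory BFGS; the quadratic cost and its analytic gradient merely stand in for the 4D-Var cost function and the adjoint-computed gradient:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

# Stand-in 4D-Var-style cost: background term plus observation misfit.
n_state, n_obs = 20, 10
x_b = np.zeros(n_state)                    # background (prior) state
H = rng.normal(size=(n_obs, n_state))      # stand-in observation operator
y = H @ rng.normal(size=n_state)           # synthetic observations

def cost(x):
    return 0.5 * np.sum((x - x_b) ** 2) + 0.5 * np.sum((H @ x - y) ** 2)

def grad(x):
    # In real 4D-Var this gradient would come from the adjoint model integration.
    return (x - x_b) + H.T @ (H @ x - y)

result = minimize(cost, x0=x_b, jac=grad, method="L-BFGS-B")
print("converged:", result.success, "final cost:", round(result.fun, 4))
```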
Lognormal Ordinary Kriging Metamodel in Simulation Optimization (orajjournal)
This paper presents a lognormal ordinary kriging (LOK) metamodel algorithm and its application to optimize a stochastic simulation problem. Kriging models have been developed as an interpolation method in geology and have been successfully used for the deterministic simulation optimization (SO) problem. In recent years, kriging metamodeling has attracted growing interest for stochastic problems, and SO researchers have begun using ordinary kriging through global optimization in stochastic systems. The goals of this study are to present the LOK metamodel algorithm and to analyze the result of the application step by step. The results show that LOK is a powerful alternative metamodel in simulation optimization when the data are too skewed.
This document summarizes a master's thesis that implemented a continuous sequential importance resampling (CSIR) algorithm to estimate predictive densities in stochastic volatility (SV) models. The thesis began with an introduction to relevant econometrics concepts. It then explained SV models and particle filtering approaches. The thesis described implementing and testing functions to develop an R package for CSIR estimation in SV models. Diagnostics and parameter estimates from simulated and real stock return data were reported. The thesis concluded by discussing the package's applications and potential for future development.
Exploring Support Vector Regression - Signals and Systems Project (Surya Chandra)
Our team competed in a Kaggle competition to predict bike share usage in Washington DC's Capital Bikeshare program using a powerful function approximation technique called support vector regression.
This document summarizes an analysis of using Support Vector Regression (SVR) to predict bike rental data from a bike sharing program in Washington D.C. It begins with an introduction to SVR and the bike rental prediction competition. It then shows that linear regression performs poorly on this non-linear problem. The document explains how SVR maps data into higher dimensions using kernel functions to allow for non-linear fits. It concludes by outlining the derivation of the SVR method using kernel functions to simplify calculations for the regression.
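A minimal scikit-learn sketch of RBF-kernel support vector regression on synthetic data; the feature names, hyperparameters, and data are assumptions for illustration, not the team's actual Kaggle pipeline:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(7)

# Hypothetical features (hour of day, temperature, humidity) and a nonlinear rental count.
X = np.column_stack([rng.integers(0, 24, 2000), rng.normal(20, 8, 2000), rng.uniform(20, 90, 2000)])
y = 50 + 30 * np.sin(X[:, 0] / 24 * 2 * np.pi) + 2.0 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 5, 2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The RBF kernel maps inputs into a higher-dimensional feature space implicitly,
# allowing a non-linear fit where plain linear regression struggles.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X_train, y_train)
print("test MAE:", round(mean_absolute_error(y_test, model.predict(X_test)), 2))
```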
IJCER (www.ijceronline.com) International Journal of computational Engineerin... (ijceronline)
This document presents a chain sampling plan for truncated life tests when lifetimes follow a generalized exponential distribution. The plan determines the minimum sample size needed to satisfy producer and consumer risks at specified quality levels in terms of the distribution's median. Tables 1 and 2 show the minimum sample sizes and corresponding acceptance numbers for different confidence levels. They also provide the operating characteristic function values for various ratios of the true scale parameter to the specified scale parameter, given a shape parameter of 2. The plan allows accepting a lot if defects are below an acceptance number and no defects occurred in preceding samples, improving on single sampling plans.
The chemistry of the actinide and transactinide elements (set vol. 1-6) (Springer)
Actinium is the first member of the actinide series of elements according to its electronic configuration. Actinium closely resembles lanthanum chemically. The three most important isotopes of actinium are 227Ac, 228Ac, and 225Ac. 227Ac is a naturally occurring isotope in the uranium-actinium decay series with a half-life of 21.772 years. 228Ac is in the thorium decay series with a half-life of 6.15 hours. 225Ac is produced from 233U with applications in medicine.
Transition metal catalyzed enantioselective allylic substitution in organic s... (Springer)
This document provides an overview of computational studies of palladium-mediated allylic substitution reactions. It discusses the history and development of quantum mechanical and molecular mechanical methods used to study the structures and reactivity of allyl palladium complexes. In particular, density functional theory methods like B3LYP have been widely used to study reaction mechanisms and factors controlling selectivity. Continuum solvation models have also been important for properly accounting for reactions in solvent.
1) Ranchers in Idaho observed lambs born with cyclopia (one eye) due to ewes grazing on corn lily plants. Cyclopamine was identified as the compound responsible and was later found to inhibit the Hedgehog signaling pathway.
2) Nakiterpiosin and nakiterpiosinone were isolated from cyanobacterial sponges and shown to inhibit cancer cell growth. Their unique C-nor-D-homosteroid skeleton presented synthetic challenges.
3) The authors developed a convergent synthesis of nakiterpiosin involving a carbonylative Stille coupling and a photo-Nazarov cyclization. Model studies led them to propose a revised structure for n
This document reviews solid-state NMR techniques that have been used to determine the molecular structures of amyloid fibrils. It discusses five categories of NMR techniques: 1) homonuclear dipolar recoupling and polarization transfer via J-coupling, 2) heteronuclear dipolar recoupling, 3) correlation spectroscopy, 4) recoupling of chemical shift anisotropy, and 5) tensor correlation methods. Specific techniques described include rotational resonance, dipolar dephasing, constant-time dipolar dephasing, REDOR, and fpRFDR-CT. These techniques have provided insights into the hydrogen-bond registry, spatial organization, and backbone torsion angles of amyloid fibrils.
This document discusses principles of ionization and ion dissociation in mass spectrometry. It covers topics like ionization energy, processes that occur during electron ionization like formation of molecular ions and fragment ions, and ionization by energetic electrons. It also discusses concepts like vertical transitions, where electronic transitions occur much faster than nuclear motions. The document provides background information on fundamental gas phase ion chemistry concepts in mass spectrometry.
Higher oxidation state organopalladium and platinum (Springer)
This document discusses the role of higher oxidation state platinum species in platinum-mediated C-H bond activation and functionalization. It summarizes that the original Shilov system, which converts alkanes to alcohols and chloroalkanes under mild conditions, involves oxidation of an alkyl-platinum(II) intermediate to an alkyl-platinum(IV) species by platinum(IV). This "umpolung" of the C-Pt bond facilitates nucleophilic attack and product formation rather than simple protonolysis back to alkane. Subsequent work has validated this mechanism and also demonstrated that platinum(IV) can be replaced by other oxidants, as long as they rapidly oxidize the
Principles and applications of ESR spectroscopy (Springer)
- Electron spin resonance (ESR) spectroscopy is used to study paramagnetic substances, particularly transition metal complexes and free radicals, by applying a magnetic field and measuring absorption of microwave radiation.
- ESR spectra provide information about electronic structure such as g-factors and hyperfine couplings by measuring resonance fields. Pulse techniques also allow measurement of dynamic properties like relaxation.
- Paramagnetic species have unpaired electrons that create a magnetic moment. ESR detects transition between spin energy levels induced by microwave absorption under an applied magnetic field.
This document discusses crystal structures of inorganic oxoacid salts from the perspective of periodic graph theory and cation arrays. It analyzes 569 crystal structures of simple salts with the formulas My(LO3)z and My(XO4)z, where M are metal cations, L are nonmetal triangular anions, and X are nonmetal tetrahedral anions. The document finds that in about three-fourths of the structures, the cation arrays are topologically equivalent to binary compounds like NaCl, NiAs, and FeB. It proposes representing these oxoacid salts as a quasi-binary model My[L/X]z, where the cation arrays determine the crystal structure topology while the oxygens play a
Field flow fractionation in biopolymer analysis (Springer)
This document summarizes a study that uses flow field-flow fractionation (FlFFF) to measure initial protein fouling on ultrafiltration membranes. FlFFF is used to determine the amount of sample recovered from membranes and insights into how retention times relate to the distance of the sample layer from the membrane wall. It was observed that compositionally similar membranes from different companies exhibited different sample recoveries. Increasing amounts of bovine serum albumin were adsorbed when the average distance of the sample layer was less than 11 mm. This information can help establish guidelines for flow rates to minimize fouling during ultrafiltration processes.
1) The document discusses phonons, which are quantized lattice vibrations in crystals that carry thermal energy. It describes modeling crystal vibrations using a harmonic lattice approach.
2) Normal modes of the lattice vibrations can be described as a set of independent harmonic oscillators. Quantum mechanically, these normal modes are quantized as phonons with discrete energy levels.
3) Phonons can be thought of as quasiparticles that carry momentum and energy in the crystal lattice. Their propagation is described using a phonon field approach rather than independent normal modes.
This chapter discusses 3D electroelastic problems and applied electroelastic problems. For 3D problems, it presents the potential function method for solving problems involving a penny-shaped crack and elliptic inclusions. It derives the governing equations and introduces potential functions to obtain the general static and dynamic solutions. For applied problems, it discusses simple electroelastic problems, laminated piezoelectric plates using classical and higher-order theories, and piezoelectric composite shells. It also presents a unified first-order approximate theory for electro-magneto-elastic thin plates.
Tensor algebra and tensor analysis for engineersSpringer
This document discusses vector and tensor analysis in Euclidean space. It defines vector- and tensor-valued functions and their derivatives. It also discusses coordinate systems, tangent vectors, and coordinate transformations. The key points are:
1. Vector- and tensor-valued functions can be differentiated using limits, with the derivatives being the vector or tensor equivalent of the rate of change.
2. Coordinate systems map vectors to real numbers and define tangent vectors along coordinate lines.
3. Under a change of coordinates, components of vectors and tensors transform according to the Jacobian of the coordinate transformation to maintain geometric meaning.
This document provides a summary of carbon nanofibers:
1) Carbon nanofibers are sp2-based linear filaments with diameters of around 100 nm that differ from continuous carbon fibers which have diameters of several micrometers.
2) Carbon nanofibers can be produced via catalytic chemical vapor deposition or via electrospinning and thermal treatment of organic polymers.
3) Carbon nanofibers exhibit properties like high specific area, flexibility, and strength due to their nanoscale diameters, making them suitable for applications like energy storage electrodes, composite fillers, and bone scaffolds.
Shock wave compression of condensed matterSpringer
This document provides an introduction and overview of shock wave physics in condensed matter. It discusses the assumptions made in treating one-dimensional plane shock waves in fluids and solids. It briefly outlines the history of the field in the United States, noting that accurate measurements of phase transitions from shock experiments established shock physics as a discipline and allowed development of a pressure calibration scale for static high pressure work. It describes some of the practical applications of shock wave experiments for providing high-pressure thermodynamic data, understanding explosive detonations, calibrating pressure scales, and enabling studies of materials under extreme conditions.
Polarization bremsstrahlung on atoms, plasmas, nanostructures and solidsSpringer
This document discusses the quantum electrodynamics approach to describing bremsstrahlung, or braking radiation, of a fast charged particle colliding with an atom. It derives expressions for the amplitude of bremsstrahlung on a one-electron atom within the first Born approximation. The amplitude has static and polarization terms. The static term corresponds to radiation from the incident particle in the nuclear field, reproducing previous results. The polarization term accounts for radiation from the atomic electron and contains resonant denominators corresponding to intermediate atomic states. The full treatment allows various limits to be taken, such as removing the nucleus or atomic electron, reproducing known results from quantum electrodynamics.
Nanostructured materials for magnetoelectronicsSpringer
This document discusses experimental approaches to studying magnetization and spin dynamics in magnetic systems with high spatial and temporal resolution.
It describes using time-resolved X-ray photoemission electron microscopy (TR-XPEEM) to image the temporal evolution of magnetization in magnetic thin films with picosecond time resolution. Results are presented showing the changing domain structure in a Permalloy thin film following excitation with a magnetic field pulse. Different rotation mechanisms are observed depending on the initial orientation of the magnetization with respect to the applied field.
A novel pump-probe magneto-optical Kerr effect technique using higher harmonic generation is also discussed for addressing spin dynamics in magnetic systems with femtosecond time resolution and element selectivity.
This document discusses nanomaterials for biosensors and implantable biodevices. It describes how nanostructured thin films have enabled the development of more sensitive electrochemical biosensors by improving the detection of specific molecules. Two common techniques for creating nanostructured thin films are described - Langmuir-Blodgett films and layer-by-layer films. These techniques allow for the precise control of film thickness at the nanoscale and have been used to immobilize biomolecules like enzymes to create biosensors. Recent research is also exploring how these nanostructured films and biomolecules can be used to create implantable biosensors for real-time monitoring inside the body.
Modern theory of magnetism in metals and alloysSpringer
This document provides an introduction to magnetism in solids. It discusses how magnetic moments originate from electron spin and orbital angular momentum at the atomic level. In solids, electron localization determines whether magnetic properties are described by localized atomic moments or collective behavior of delocalized electrons. The key concepts of metals and insulators are introduced. The document then presents the basic Hamiltonian used to describe magnetism in solids, including terms for kinetic energy, electron-electron interactions, spin-orbit coupling, and the Zeeman effect. It also discusses how atomic orbitals can be used as a basis set to represent the Hamiltonian and describes the symmetry properties of s, p, and d orbitals in cubic crystals.
This chapter introduces and classifies various types of damage that can occur in structures. Damage can be caused by forces, deformations, aggressive environments, or temperatures. It can occur suddenly or over time. The chapter discusses different damage mechanisms including corrosion, excessive deformation, plastic instability, wear, and fracture. It also introduces concepts that will be covered in more detail later such as damage mechanics, fracture mechanics, and the influence of microstructure on damage and fracture. The chapter aims to provide an overview of damage types before exploring specific mechanisms and analyses in later chapters.
This document summarizes research on identifying spin-wave eigen-modes in a circular spin-valve nano-pillar using Magnetic Resonance Force Microscopy (MRFM). Key findings include:
1) Distinct spin-wave spectra are observed depending on whether the nano-pillar is excited by a uniform in-plane radio-frequency magnetic field or by a radio-frequency current perpendicular to the layers, indicating different excitation mechanisms.
2) Micromagnetic simulations show the azimuthal index φ is the discriminating parameter, with only φ=0 modes excited by the uniform field and only φ=+1 modes excited by the orthogonal current-induced Oersted field.
3) Three indices are used to label resonance
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
1 Introduction
The composition forecasting model was first considered in the work of Bates and Granger (1969) [1]. Composition forecasting models are now in widespread use in many areas, especially in economics. Zhang, Wang and Gao (2008) [2] applied a linear composition forecasting model, composed of the time series model, the second-order exponential smoothing model and the GM(1,1) forecasting model, to agricultural economy research. The GM(1,1) model is one of the most frequently used grey forecasting models; it is a time series forecasting model encompassing a group of differential equations adapted for parameter variance, rather than a single first-order differential equation [3-4]. In our previous works [5-9], we extended the work of Zhang, Wang and Gao by proposing several nonlinear composition forecasting models, likewise composed of the time series model, the second-order exponential smoothing model and the GM(1,1) forecasting model, by using the ridge regression model [5] and the theory of the Choquet integral with respect to various fuzzy measures, including Sugeno's λ-measure [13], Zadeh's P-measure [14] and the authors' own fuzzy measures: the L-measure, the extensional L-measure and the completed extensional L-measure [6-12].

The first two well-known fuzzy measures are univalent measures: each has just one feasible fuzzy measure satisfying the conditions of its own definition. The measures proposed in our previous works are multivalent fuzzy measures: each has infinitely many feasible fuzzy measures satisfying the conditions of its own definition. The fuzzy-measure-based Choquet integral composition forecasting models are supervised methods; by comparing the mean square errors between the estimated values and the corresponding true values, each of our multivalent-fuzzy-measure-based forecasting models has more opportunity to find a better feasible fuzzy measure, so their performance is always at least as good as that of the models based on the univalent fuzzy measures, the λ-measure and the P-measure. In addition, the author has proved that the P-measure is a special case of the L-measure [7], so all multivalent fuzzy measures extending the L-measure are at least as good as their special case, the P-measure. However, the λ-measure is not a special case of the L-measure, so an improved L-measure, called the extensional L-measure, was proposed to contain the λ-measure as a special case [7]. The P-measure, the λ-measure and the L-measure are then all special cases of the extensional L-measure. However, the extensional L-measure does not attain the largest fuzzy measure, the B-measure, and is therefore not a completed fuzzy measure. To overcome this drawback, an improved extensional L-measure, called the completed extensional L-measure, was proposed; all of the above-mentioned fuzzy measures are special cases of it. A real-data experiment showed that the extensional L-measure Choquet integral based composition forecasting model was the best one.

On the other hand, all of the above-mentioned Choquet integral composition forecasting models with their different fuzzy measures are based on the N-density. From the definitions of the Choquet integral and of fuzzy measures, we know that the Choquet integral can be viewed as a function of its fuzzy measure, and the fuzzy measure can be viewed as a function of its fuzzy density function; therefore, the performance of any Choquet integral is determined by its fuzzy measure, and the performance of any fuzzy measure is determined by its fuzzy density function. In other words, the performance of any Choquet integral is determined by its fuzzy density function. The older fuzzy density function, the N-density, is based on the linear correlation coefficient, whereas the new fuzzy density function, the M-density, based on the mean square error, is non-linear, and the relations among the composition forecasting model and the three given forecasting models are non-linear as well. Hence, for the same Choquet integral with respect to the same fuzzy measure, the performance of the non-linear fuzzy density function is better than that of the linear one.

In this paper, a novel fuzzy measure, called the completed extensional L-measure, and the new fuzzy density function, the M-density, are considered. Based on the M-density and the proposed completed extensional L-measure, a novel composition forecasting model is also considered, and a real-data experiment is conducted to compare the forecasting efficiency of the two fuzzy densities, the M-density and the N-density.
2 The Composition Forecasting Model
In this paper, the sequential mean square error is used to evaluate the forecasting validity of a forecasting model on sequential data; its formal definition is as follows.
Definition 1. Sequential Mean Square Error (SMSE) [9-10]
If $\theta_{t+j}$ is the realized value of the target variable at time $(t+j)$, $\hat{\theta}_{t+j|t}$ is the forecasted value of the target variable at time $(t+j)$ based on the training data set from time 1 to time $t$, and

$SMSE^{(h)}(\hat{\theta}_t) = \frac{1}{h}\sum_{j=1}^{h}\left(\hat{\theta}_{t+j|t+j-1} - \theta_{t+j}\right)^2$   (1)

then $SMSE^{(h)}(\hat{\theta}_t)$ is called the sequential mean square error (SMSE) of the $h$ forecasted values of the target variable from time $(t+1)$ to time $(t+h)$ based on the training data set from time 1 to time $t$. The composition forecasting model, or combination forecasting model, can be defined as follows.
Definition 2. Composition Forecasting Model [9-10]
(i) Let $y_t$ be the realized value of the target variable at time $t$.
(ii) Let $x_{t,1}, x_{t,2}, \ldots, x_{t,m}$ be a set of $m$ competing predictors of $y_t$, and let $\hat{y}_t$ be a function $f$ of $x_{t,1}, x_{t,2}, \ldots, x_{t,m}$ with some parameters, denoted as

$\hat{y}_t = f\left(x_{t,1}, x_{t,2}, \ldots, x_{t,m}\right)$   (2)

(iii) Let $x_{t+j,k}$ be the forecasted value of the target variable by competing predictor $k$ at time $(t+j)$ based on the training data set from time 1 to time $t$, and, for the same function $f$ as above, let

$\hat{y}_{t+j|t} = f\left(x_{t+j,1}, x_{t+j,2}, \ldots, x_{t+j,m}\right)$   (3)

(iv) Let

$SMSE^{(h)}(\hat{y}_t) = \frac{1}{h}\sum_{j=1}^{h}\left(\hat{y}_{t+j|t+j-1} - y_{t+j}\right)^2$   (4)

$SMSE^{(h)}(x_{t,k}) = \frac{1}{h}\sum_{j=1}^{h}\left(x_{t+j,k} - y_{t+j}\right)^2$   (5)

For the current time $t$ and the future $h$ times, if

$SMSE^{(h)}(\hat{y}_t) \leq \min_{1\leq k\leq m} SMSE^{(h)}(x_{t,k})$   (6)

then $\hat{y}_t$ is called a composition forecasting model for the future $h$ times of $x_{t,1}, x_{t,2}, \ldots, x_{t,m}$ or, in brief, a composition forecasting model of $x_{t,1}, x_{t,2}, \ldots, x_{t,m}$.
Definition 3. Linear Combination Forecasting Model [9-10]
For given parameters $\beta_k \in R$ with $\sum_{k=1}^{m}\beta_k = 1$, let

$\hat{y}_t = \sum_{k=1}^{m}\beta_k x_{t,k}$   (7)

If a composition forecasting model $\hat{y}_t$ of $x_{t,1}, x_{t,2}, \ldots, x_{t,m}$ has the form (7), it is called a linear combination forecasting model or linear composition forecasting model; otherwise, it is called a non-linear combination forecasting model or non-linear composition forecasting model.
Definition 4. Ridge Regression Composition Forecasting Model [5,9,10]
(i) Let $\mathbf{y}_t = \left(y_1, y_2, \ldots, y_t\right)^T$ be the realized data vector of the target variable from time 1 to time $t$, and let $\mathbf{x}_{t,k} = \left(x_{1,k}, x_{2,k}, \ldots, x_{t,k}\right)^T$ be the forecasted value vector of competing predictor $k$ of the target variable from time 1 to time $t$.
(ii) Let $X_t = \left(\mathbf{x}_{t,1}, \mathbf{x}_{t,2}, \ldots, \mathbf{x}_{t,m}\right)$ be the forecasted value matrix of the $m$ competing predictors of the target variable from time 1 to time $t$.
(iii) Let

$\hat{\mathbf{y}}_t = \left(\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_t\right)^T$   (8)

$f\left(X_t\right) = f\left(\mathbf{x}_{t,1}, \mathbf{x}_{t,2}, \ldots, \mathbf{x}_{t,m}\right)$   (9)

(iv) Let

$\beta_t^{(r)} = \left(\beta_{t,1}^{(r)}, \beta_{t,2}^{(r)}, \ldots, \beta_{t,m}^{(r)}\right)^T = \left(X_t^T X_t + rI_m\right)^{-1} X_t^T \mathbf{y}_t$   (10)

$\hat{\mathbf{y}}_t = f\left(X_t\right) = X_t\beta_t^{(r)}$   (11)

Then

$\hat{y}_{t+j|t} = f\left(X_{t+j}\right) = X_{t+j}\beta_t^{(r)}$   (12)

$\hat{y}_{t+j|t} = f\left(x_{t+j,1}, x_{t+j,2}, \ldots, x_{t+j,m}\right) = \left(x_{t+j,1}, x_{t+j,2}, \ldots, x_{t+j,m}\right)\beta_t^{(r)} = \sum_{k=1}^{m}\beta_{t,k}^{(r)} x_{t+j,k}$   (13)

For the current time $t$ and the future $h$ times, if

$SMSE^{(h)}(\hat{y}_t) \leq \min_{1\leq k\leq m} SMSE^{(h)}(x_{t,k})$   (14)

and the ridge coefficient $r = 0$, then $\hat{y}_t$ is called a multiple linear regression combination forecasting model of $x_{t,1}, x_{t,2}, \ldots, x_{t,m}$. If formula (14) is satisfied and $r > 0$, then $\hat{y}_t$ is called a ridge regression composition forecasting model of $x_{t,1}, x_{t,2}, \ldots, x_{t,m}$. Note that Hoerl, Kennard, and Baldwin (1975) suggested the ridge coefficient of ridge regression

$r = \frac{m\hat{\sigma}^2}{\hat{\beta}^T\hat{\beta}}, \qquad \hat{\sigma}^2 = \frac{1}{t}\sum_{i=1}^{t}\left(y_i - \hat{y}_i\right)^2$   (15)
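The ridge regression composition forecasting model of Definition 4 can be sketched in a few lines of Python. This is a minimal sketch under the reconstruction above, not the authors' implementation; the names ridge_composition_fit and ridge_composition_forecast and the toy data are hypothetical:

```python
# Minimal sketch of Definition 4: ridge coefficients (10), forecasts (13), and the
# Hoerl-Kennard-Baldwin ridge parameter (15). Illustrative only.
import numpy as np

def ridge_composition_fit(x_matrix, y):
    """x_matrix: t x m forecasted-value matrix X_t; y: realized values y_1..y_t."""
    x_matrix = np.asarray(x_matrix, dtype=float)
    y = np.asarray(y, dtype=float)
    t, m = x_matrix.shape
    beta_ols = np.linalg.lstsq(x_matrix, y, rcond=None)[0]       # r = 0 fit
    sigma2 = np.mean((y - x_matrix @ beta_ols) ** 2)             # sigma^2 of eq. (15)
    r = m * sigma2 / float(beta_ols @ beta_ols)                  # ridge coefficient of eq. (15)
    beta_r = np.linalg.solve(x_matrix.T @ x_matrix + r * np.eye(m),
                             x_matrix.T @ y)                     # eq. (10)
    return beta_r, r

def ridge_composition_forecast(x_future, beta_r):
    """Forecast the target with eq. (13) for new rows of predictor forecasts."""
    return np.asarray(x_future, dtype=float) @ beta_r

# toy usage with m = 3 competing predictors and t = 50 training periods
rng = np.random.default_rng(0)
x_train = rng.uniform(400.0, 700.0, size=(50, 3))
y_train = x_train @ np.array([0.5, 0.3, 0.2]) + rng.normal(0.0, 5.0, size=50)
beta_r, r = ridge_composition_fit(x_train, y_train)
print(r, ridge_composition_forecast(x_train[:2], beta_r))
```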
3 Choquet Integral Composition Forecasting Model
3.1 Fuzzy Measures [6-13]
Definition 5. Fuzzy Measure [6-13]
A fuzzy measure $\mu$ on a finite set $X$ is a set function $\mu: 2^X \rightarrow [0,1]$ satisfying the following axioms:

$\mu(\emptyset) = 0, \quad \mu(X) = 1$ (boundary conditions)   (16)

$A \subseteq B \Rightarrow \mu(A) \leq \mu(B)$ (monotonicity)   (17)
3.2 Fuzzy Density Function [6-10]
Definition 6. Fuzzy Density Function, Density [6-10]
(i) A fuzzy density function of a fuzzy measure $\mu$ on a finite set $X$ is a function $d: X \rightarrow [0,1]$ satisfying

$d(x) = \mu(\{x\}), \quad x \in X$   (18)

$d(x)$ is called the density of the singleton $x$.
(ii) A fuzzy density function is called a normalized fuzzy density function, or a density, if it satisfies

$\sum_{x\in X} d(x) = 1$   (19)
Definition 7. Standard Fuzzy Measure [6-10]
A fuzzy measure is called a standard fuzzy measure, if its fuzzy density function is
a normalized fuzzy density function.
Definition 8. N-density [8-10]
Let $\mu$ be a fuzzy measure on a finite set $X = \{x_1, x_2, \ldots, x_n\}$, let $y_i$ be the global response of subject $i$ and let $f_i(x_j)$ be the evaluation of subject $i$ for the singleton $x_j$, satisfying

$0 < f_i(x_j) < 1, \quad i = 1,2,\ldots,N, \quad j = 1,2,\ldots,n$   (20)

If

$d_N(x_j) = \frac{r\left(f(x_j)\right)}{\sum_{j=1}^{n} r\left(f(x_j)\right)}, \quad j = 1,2,\ldots,n$   (21)

where $r\left(f(x_j)\right)$ is the linear correlation coefficient between $y_i$ and $f_i(x_j)$, satisfying

$r\left(f(x_j)\right) = \frac{S_{y,x_j}}{S_y S_{x_j}} \geq 0$   (22)

$S_y^2 = \frac{1}{N}\sum_{i=1}^{N} y_i^2 - \left(\frac{1}{N}\sum_{i=1}^{N} y_i\right)^2$   (23)

$S_{x_j}^2 = \frac{1}{N}\sum_{i=1}^{N} f_i^2(x_j) - \left(\frac{1}{N}\sum_{i=1}^{N} f_i(x_j)\right)^2$   (24)

$S_{y,x_j} = \frac{1}{N}\sum_{i=1}^{N}\left(y_i - \frac{1}{N}\sum_{i=1}^{N} y_i\right)\left(f_i(x_j) - \frac{1}{N}\sum_{i=1}^{N} f_i(x_j)\right)$   (25)

then the function $d_N: X \rightarrow [0,1]$ satisfying $\mu(\{x\}) = d_N(x), \forall x \in X$, is a fuzzy density function, called the N-density of $\mu$.
Note that
(i) the N-density is a normalized fuzzy density function;
(ii) the N-density is a linear fuzzy density function based on linear correlation coefficients.
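A minimal Python sketch of the N-density of Definition 8 follows (the name n_density and the toy data are hypothetical; the correlations are clipped at zero so that the non-negativity in eq. (22) holds):

```python
# Minimal sketch of eqs. (21)-(25): normalize the non-negative correlation
# coefficients between the global responses and each singleton's evaluations.
import numpy as np

def n_density(evaluations, responses):
    """evaluations: N x n matrix with entries f_i(x_j); responses: y_1..y_N."""
    f_mat = np.asarray(evaluations, dtype=float)
    y = np.asarray(responses, dtype=float)
    r = np.empty(f_mat.shape[1])
    s_y = np.sqrt(np.mean(y ** 2) - np.mean(y) ** 2)                      # eq. (23)
    for j in range(f_mat.shape[1]):
        col = f_mat[:, j]
        s_x = np.sqrt(np.mean(col ** 2) - np.mean(col) ** 2)              # eq. (24)
        s_yx = np.mean((y - np.mean(y)) * (col - np.mean(col)))           # eq. (25)
        r[j] = max(s_yx / (s_y * s_x), 0.0)                               # eq. (22)
    return r / r.sum()                                                    # eq. (21)

# toy usage: N = 4 subjects, n = 3 singletons
f_toy = [[0.52, 0.61, 0.47], [0.43, 0.55, 0.50], [0.66, 0.70, 0.58], [0.58, 0.62, 0.61]]
print(n_density(f_toy, [0.55, 0.47, 0.69, 0.60]))
```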
3.3 M-Density [10]
Any linear function can be viewed as a special case of some corresponding non-linear function. In this paper, a non-linear fuzzy density function based on the mean square error, denoted the M-density, is proposed; its formal definition is as follows.
Definition 9. M-density
Let $\mu$ be a fuzzy measure on a finite set $X = \{x_1, x_2, \ldots, x_n\}$, let $y_i$ be the global response of subject $i$ and let $f_i(x_j)$ be the evaluation of subject $i$ for the singleton $x_j$, satisfying

$0 < f_i(x_j) < 1, \quad i = 1,2,\ldots,N, \quad j = 1,2,\ldots,n$   (26)

If

$d_M(x_j) = \frac{\left[MSE(x_j)\right]^{-1}}{\sum_{j=1}^{n}\left[MSE(x_j)\right]^{-1}}, \quad j = 1,2,\ldots,n$   (27)

where

$MSE(x_j) = \frac{1}{N}\sum_{i=1}^{N}\left(y_i - f_i(x_j)\right)^2$   (28)

then the function $d_M: X \rightarrow [0,1]$ satisfying $\mu(\{x\}) = d_M(x), \forall x \in X$, is a fuzzy density function, called the M-density of $\mu$.
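Similarly, a minimal Python sketch of the M-density of eqs. (27)-(28) (the name m_density and the toy data are hypothetical):

```python
# Minimal sketch of Definition 9: weights proportional to the reciprocal mean
# square errors of the singletons' evaluations against the global responses.
import numpy as np

def m_density(evaluations, responses):
    """evaluations: N x n matrix with entries f_i(x_j); responses: y_1..y_N."""
    f_mat = np.asarray(evaluations, dtype=float)
    y = np.asarray(responses, dtype=float)
    mse = np.mean((y[:, None] - f_mat) ** 2, axis=0)   # eq. (28), one MSE per singleton
    inv = 1.0 / mse
    return inv / inv.sum()                             # eq. (27)

# toy usage: N = 4 subjects, n = 3 singletons
f_toy = [[0.52, 0.61, 0.47], [0.43, 0.55, 0.50], [0.66, 0.70, 0.58], [0.58, 0.62, 0.61]]
print(m_density(f_toy, [0.55, 0.47, 0.69, 0.60]))
```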
3.4 Classification of Fuzzy Measures [6-10]
Definition 10. Additive Measure, Sub-additive Measure and Super-additive Measure
(i) A fuzzy measure $\mu$ is called a sub-additive measure if

$\forall A, B \subset X, \ A \cap B = \emptyset: \quad g_\mu(A \cup B) < g_\mu(A) + g_\mu(B)$   (29)

(ii) A fuzzy measure $\mu$ is called an additive measure if

$\forall A, B \subset X, \ A \cap B = \emptyset: \quad g_\mu(A \cup B) = g_\mu(A) + g_\mu(B)$   (30)

(iii) A fuzzy measure $\mu$ is called a super-additive measure if

$\forall A, B \subset X, \ A \cap B = \emptyset: \quad g_\mu(A \cup B) > g_\mu(A) + g_\mu(B)$   (31)

(iv) A fuzzy measure is called a mixed fuzzy measure if it is neither an additive measure, nor a sub-additive measure, nor a super-additive measure.

Theorem 1. Let $d$ be a given fuzzy density function of an additive measure, the A-measure; then its measure function $g_A: 2^X \rightarrow [0,1]$ satisfies

$\forall E \subset X: \quad g_A(E) = \sum_{x \in E} d(x)$   (32)
3.5 λ-Measure [13]
Definition 11. λ-measure [13]
For a given fuzzy density function $d$ on a finite set $X$, $|X| = n$, a measure is called a λ-measure if its measure function $g_\lambda: 2^X \rightarrow [0,1]$ satisfies:
(i) $g_\lambda(\emptyset) = 0, \quad g_\lambda(X) = 1$   (33)
(ii) $\forall A, B \in 2^X, \ A \cap B = \emptyset, \ A \cup B \neq X$:

$g_\lambda(A \cup B) = g_\lambda(A) + g_\lambda(B) + \lambda\, g_\lambda(A)\, g_\lambda(B)$   (34)

(iii) $\prod_{i=1}^{n}\left[1 + \lambda\, d(x_i)\right] = 1 + \lambda, \quad \lambda > -1, \quad d(x_i) = g_\lambda(\{x_i\})$   (35)

Theorem 2. Let $d$ be a given fuzzy density function on a finite set $X$, $|X| = n$. Under the conditions of the λ-measure, equation (35) determines the parameter λ uniquely:
(i) $\sum_{x\in X} d(x) > 1 \Leftrightarrow \lambda < 0$, and the λ-measure is a sub-additive measure   (36)
(ii) $\sum_{x\in X} d(x) = 1 \Leftrightarrow \lambda = 0$, and the λ-measure is an additive measure   (37)
(iii) $\sum_{x\in X} d(x) < 1 \Leftrightarrow \lambda > 0$, and the λ-measure is a super-additive measure   (38)

Note that
(i) the λ-measure has just one feasible fuzzy measure satisfying the conditions of its own definition;
(ii) in equation (35), the value of $d(x_i)$ is decided first, and the measure parameter λ is then solved for; $\prod_{i=1}^{n}\left[1 + \lambda\, d(x_i)\right]$ can be viewed as a function of the fuzzy density $d(x_i)$. Therefore, we can say that the λ-measure is determined by its fuzzy density function;
(iii) the λ-measure cannot be a mixed fuzzy measure.
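Because eq. (35) has a single admissible root, the λ-measure parameter can be found numerically. The sketch below (hypothetical names solve_lambda and lambda_measure with toy densities, not code from the paper) solves eq. (35) by bisection and then builds $g_\lambda$ from the recursion (34):

```python
# Minimal sketch: solve eq. (35) for lambda and evaluate g_lambda via eq. (34).
import numpy as np

def solve_lambda(densities, tol=1e-12):
    """Find lambda > -1 with prod(1 + lambda*d_i) = 1 + lambda."""
    d = np.asarray(densities, dtype=float)
    f = lambda lam: np.prod(1.0 + lam * d) - (1.0 + lam)
    s = d.sum()
    if abs(s - 1.0) < 1e-12:
        return 0.0                                     # additive case, Theorem 2 (ii)
    # sub-additive case: root in (-1, 0); super-additive case: root in (0, inf)
    lo, hi = (-1.0 + 1e-9, -1e-12) if s > 1.0 else (1e-12, 1e6)
    while hi - lo > tol * max(1.0, abs(lo), abs(hi)):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def lambda_measure(densities_in_A, lam):
    """g_lambda(A), built up one singleton at a time with eq. (34)."""
    g = 0.0
    for dx in densities_in_A:
        g = g + dx + lam * g * dx
    return g

d = [0.30, 0.40, 0.20]                                  # toy densities, sum < 1
lam = solve_lambda(d)                                   # positive root, Theorem 2 (iii)
print(lam, lambda_measure(d, lam))                      # g_lambda(X) is 1 up to rounding
```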
3.6 P-Measure [14]
Definition 12. P-measure [14]
For a given fuzzy density function $d$ on a finite set $X$, $|X| = n$, a measure is called a P-measure if its measure function $g_P: 2^X \rightarrow [0,1]$ satisfies:
(i) $g_P(\emptyset) = 0, \quad g_P(X) = 1$   (39)
(ii) $\forall A \in 2^X: \quad g_P(A) = \max_{x\in A} d(x) = \max_{x\in A} g_P(\{x\})$   (40)

Theorem 3. The P-measure is always a sub-additive measure [6-10].
Note that, since the maximum of any finite set is unique, the P-measure has just one feasible fuzzy measure satisfying the conditions of its own definition.
3.7 Multivalent Fuzzy Measure [6-10]
Definition 13. Univalent Fuzzy Measure, Multivalent Fuzzy Measure [4-8]
A fuzzy measure is called a univalent fuzzy measure if it has just one feasible fuzzy measure satisfying the conditions of its own definition; otherwise, it is called a multivalent fuzzy measure.
Note that both the λ-measure and the P-measure are univalent fuzzy measures.
3.8 L-Measure [6-10]
In my previous work [6], a multivalent fuzzy measure was proposed, called the L-measure (since my last name is Liu). Its formal definition is as follows.

Definition 14. L-measure [6-10]
For a given fuzzy density function $d$ on a finite set $X$, $|X| = n$, a measure is called an L-measure if its measure function $g_L: 2^X \rightarrow [0,1]$ satisfies:
(i) $g_L(\emptyset) = 0, \quad g_L(X) = 1$   (41)
(ii) for any $L \in [0, \infty)$ and any $\emptyset \neq A \subset X$,

$g_L(A) = \max_{x\in A} d(x) + \dfrac{\left(|A|-1\right) L \sum_{x\in A} d(x)\left[1 - \max_{x\in A} d(x)\right]}{\left[n - |A| + \left(|A|-1\right)L\right]\sum_{x\in X} d(x)}$   (42)
Theorem 4. Important Properties of the L-measure [6]
(i) For any $L \in [0, \infty)$, the L-measure is a multivalent fuzzy measure; in other words, the L-measure has infinitely many fuzzy measure solutions.
(ii) The L-measure is an increasing function of $L$.
(iii) If $L = 0$, then the L-measure is just the P-measure.
(iv) The L-measure may be a mixed fuzzy measure.
Note that
(i) the P-measure is a special case of the L-measure;
(ii) the L-measure does not contain the additive measure and the λ-measure; in other words, the additive measure and the λ-measure are not special cases of the L-measure.
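As a small illustration, the sketch below (hypothetical name l_measure with toy calls) evaluates eq. (42) as reconstructed above; setting L = 0 recovers the P-measure of Theorem 4 (iii). It is illustrative only and should be checked against the original definition.

```python
# Minimal sketch of the L-measure of eq. (42); illustrative only.
def l_measure(subset, densities, L):
    """g_L(A) for a nonempty subset A of X, given singleton densities (a dict) and L >= 0."""
    subset = set(subset)
    n, a = len(densities), len(subset)
    if a == n:
        return 1.0                                 # boundary condition (41)
    max_a = max(densities[x] for x in subset)
    sum_a = sum(densities[x] for x in subset)
    sum_x = sum(densities.values())
    numerator = (a - 1) * L * sum_a * (1.0 - max_a)
    denominator = (n - a + (a - 1) * L) * sum_x
    return max_a + numerator / denominator

d = {"x1": 0.3331, "x2": 0.3343, "x3": 0.3326}     # the N-density of eq. (59)
print(l_measure({"x1", "x2"}, d, 0.0))             # L = 0: reduces to the P-measure
print(l_measure({"x1", "x2"}, d, 2.0))             # L > 0: one of infinitely many solutions
```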
3.9 Extensional L-Measure [7]
To overcome this drawback of the L-measure, an improved multivalent fuzzy measure containing the additive measure and the λ-measure, called the extensional L-measure, was proposed in my previous paper [7]. Its formal definition is as follows.
Definition 15. Extensional L-measure, LE-measure [7]
For a given fuzzy density function $d$ on a finite set $X$, $|X| = n$, a measure is called an extensional L-measure if its measure function $g_{L_E}: 2^X \rightarrow [0,1]$ satisfies:
(i) $g_{L_E}(\emptyset) = 0, \quad g_{L_E}(X) = 1$   (43)
(ii) for any $L \in [-1, \infty)$ and any $\emptyset \neq A \subset X$,

$g_{L_E}(A) = \begin{cases} \left(1+L\right)\sum_{x\in A} d(x) - L\max_{x\in A} d(x), & L \in [-1, 0] \\[2mm] \sum_{x\in A} d(x) + \dfrac{\left(|A|-1\right) L \sum_{x\in A} d(x)\left[1 - \sum_{x\in A} d(x)\right]}{\left[n - |A| + \left(|A|-1\right)L\right]\sum_{x\in X} d(x)}, & L \in (0, \infty) \end{cases}$   (44)
Theorem 5. Important Properties of the LE-measure [7]
(i) For any $L \in [-1, \infty)$, the LE-measure is a multivalent fuzzy measure; in other words, the LE-measure has infinitely many fuzzy measure solutions.
(ii) The LE-measure is an increasing function of $L$.
(iii) If $L = -1$, then the LE-measure is just the P-measure.
(iv) If $L = 0$, then the LE-measure is just the additive measure.
(v) If $L = 0$ and $\sum_{x\in X} d(x) = 1$, then the LE-measure is just the λ-measure.
(vi) If $-1 < L < 0$, then the LE-measure is a super-additive measure.
(vii) If $L > 0$, then the LE-measure is a sub-additive measure.
Note that the additive measure, the λ-measure and the P-measure are all special cases of the LE-measure.
3.10 B-Measure [7]
To extend the extensional L-measure, a special fuzzy measure was proposed in my previous work, as follows.

Definition 16. B-measure [7]
For a given fuzzy density function $d$, a B-measure $g_B$ is a measure on a finite set $X$, $|X| = n$, satisfying

$\forall A \subset X: \quad g_B(A) = \begin{cases} \sum_{x\in A} d(x), & |A| \leq 1 \\ 1, & |A| > 1 \end{cases}$   (45)
Theorem 6. Any B-measure is a super-additive measure.
3.11 Comparison of Two Fuzzy Measures [7-10]
Definition 17. Comparison of two fuzzy measures [7-10]
For a given fuzzy density function $d(x)$ on a finite set $X$, let $\mu_1$ and $\mu_2$ be two fuzzy measures on $X$.
(i) If $g_{\mu_1}(A) = g_{\mu_2}(A), \ \forall A \subset X$, then we say that the $\mu_1$-measure is equal to the $\mu_2$-measure, denoted as

$\mu_1\text{-measure} = \mu_2\text{-measure}$   (46)

(ii) If $g_{\mu_1}(A) < g_{\mu_2}(A), \ \forall A \subset X, \ 1 < |A| < |X|$, then we say that the $\mu_1$-measure is less than the $\mu_2$-measure, or that the $\mu_2$-measure is larger than the $\mu_1$-measure, denoted as

$\mu_1\text{-measure} < \mu_2\text{-measure}$   (47)

(iii) If $g_{\mu_1}(A) \leq g_{\mu_2}(A), \ \forall A \subset X, \ 1 < |A| < |X|$, then we say that the $\mu_1$-measure is not larger than the $\mu_2$-measure, or that the $\mu_2$-measure is not smaller than the $\mu_1$-measure, denoted as

$\mu_1\text{-measure} \leq \mu_2\text{-measure}$   (48)

Theorem 7. For any given fuzzy density function, if the $\mu$-measure is a fuzzy measure, then

$\text{P-measure} \leq \mu\text{-measure} \leq \text{B-measure}$   (49)

In other words, for any given fuzzy density function, the P-measure is the smallest fuzzy measure and the B-measure is the largest fuzzy measure.
3.12 Completed Fuzzy Measure
Definition 18. Completed fuzzy measure [8]
If the measure function of a multivalent fuzzy measure has a continuum of fuzzy measure solutions, and both the P-measure and the B-measure are limit fuzzy measures of it, then this multivalent fuzzy measure is called a completed fuzzy measure.
Note that neither the L-measure nor the LE-measure is a completed fuzzy measure, since

$\lim_{L\to\infty} g_L(A) = \max_{x\in A} d(x) + \dfrac{\sum_{x\in A} d(x)\left[1 - \max_{x\in A} d(x)\right]}{\sum_{x\in X} d(x)} \neq 1$

in general (and similarly for the LE-measure), so the B-measure is not a limit fuzzy measure of the L-measure or of the LE-measure.
3.13 Completed Extensional L-Measure
Definition 19. Completed extensional L-measure, LCE-measure
For a given fuzzy density function $d$ on a finite set $X$, $|X| = n$, a measure is called a completed extensional L-measure if its measure function $g_{L_{CE}}: 2^X \rightarrow [0,1]$ satisfies:
(i) $g_{L_{CE}}(\emptyset) = 0, \quad g_{L_{CE}}(X) = 1$   (50)
(ii) for any $L \in [-1, \infty)$ and any $\emptyset \neq A \subset X$,

$g_{L_{CE}}(A) = \begin{cases} \left(1+L\right)\sum_{x\in A} d(x) - L\max_{x\in A} d(x), & L \in [-1, 0] \\[2mm] \sum_{x\in A} d(x) + \dfrac{\left(|A|-1\right) L \sum_{x\in A} d(x)\left[1 - \sum_{x\in A} d(x)\right]}{\left(n - |A|\right)\sum_{x\in X} d(x) + \left(|A|-1\right) L \sum_{x\in A} d(x)}, & L \in (0, \infty) \end{cases}$   (51)
Theorem 8. Important Properties of the LCE-measure [7]
(i) For any $L \in [-1, \infty)$, the LCE-measure is a multivalent fuzzy measure; in other words, the LCE-measure has infinitely many fuzzy measure solutions.
(ii) The LCE-measure is an increasing function of $L$.
(iii) If $L = -1$, then the LCE-measure is just the P-measure.
(iv) If $L = 0$, then the LCE-measure is just the additive measure.
(v) If $L = 0$ and $\sum_{x\in X} d(x) = 1$, then the LCE-measure is just the λ-measure.
(vi) If $-1 < L < 0$, then the LCE-measure is a sub-additive measure.
(vii) If $L > 0$, then the LCE-measure is a super-additive measure.
(viii) If $L \rightarrow \infty$, then the LCE-measure tends to the B-measure.
(ix) The LCE-measure is a completed fuzzy measure.
Note that the additive measure, the λ-measure, the P-measure and the B-measure are all special cases of the LCE-measure.
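The following minimal sketch (hypothetical name lce_measure with toy calls) transcribes Definition 19 as reconstructed above; it is illustrative only, but it reproduces the special cases of Theorem 8: L = -1 gives the P-measure, L = 0 the additive measure, and very large L approaches the B-measure.

```python
# Minimal sketch of the completed extensional L-measure of eq. (51); illustrative only.
def lce_measure(subset, densities, L):
    """g_LCE(A) for a subset A of X, given singleton densities (a dict) and L >= -1."""
    subset = set(subset)
    if not subset:
        return 0.0                                    # boundary condition (50)
    n, a = len(densities), len(subset)
    if a == n:
        return 1.0                                    # boundary condition (50)
    sum_a = sum(densities[x] for x in subset)
    if L <= 0:                                        # branch for L in [-1, 0]
        return (1.0 + L) * sum_a - L * max(densities[x] for x in subset)
    sum_x = sum(densities.values())                   # branch for L in (0, inf)
    numerator = (a - 1) * L * sum_a * (1.0 - sum_a)
    denominator = (n - a) * sum_x + (a - 1) * L * sum_a
    return sum_a + numerator / denominator

d = {"x1": 0.2770, "x2": 0.3813, "x3": 0.3417}        # the M-density of eq. (60)
for L in (-1.0, 0.0, 1.0, 1e6):                       # P-measure, additive, mixed, near B-measure
    print(L, lce_measure({"x1", "x2"}, d, L))
```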
3.14 Choquet Integral
Definition 20. Choquet Integral [9-10]
Let $\mu$ be a fuzzy measure on a finite set $X = \{x_1, x_2, \ldots, x_m\}$. The Choquet integral of $f_i: X \rightarrow R^+$ with respect to $\mu$ for individual $i$ is defined as

$\int_C f_i \, d\mu = \sum_{j=1}^{m}\left[f_i\left(x_{(j)}\right) - f_i\left(x_{(j-1)}\right)\right]\mu\left(A_{(j)}\right), \quad i = 1,2,\ldots,N$   (52)

where $f_i\left(x_{(0)}\right) = 0$ and $\left(x_{(j)}\right)$ indicates that the indices have been permuted so that

$0 \leq f_i\left(x_{(1)}\right) \leq f_i\left(x_{(2)}\right) \leq \ldots \leq f_i\left(x_{(m)}\right), \qquad A_{(j)} = \left\{x_{(j)}, x_{(j+1)}, \ldots, x_{(m)}\right\}$   (53)

Note that, from Definition 20, for a given integrand $f_i: X \rightarrow R^+$, the Choquet integral can be viewed as a function of the fuzzy measure $\mu$; in other words, the value of the Choquet integral is determined by its fuzzy measure.
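For illustration, the Choquet integral of Definition 20 can be computed directly from the sorted integrand values. The sketch below (hypothetical name choquet_integral with toy data) takes the fuzzy measure as a callable on subsets, e.g. one of the measure sketches given earlier with its density and L fixed:

```python
# Minimal sketch of eqs. (52)-(53): sort the integrand and accumulate the increments
# weighted by the measure of the upper-level sets A_(j). Illustrative only.
def choquet_integral(integrand, measure):
    """integrand: dict mapping each singleton x to f_i(x) >= 0; measure: g(A) on subsets."""
    items = sorted(integrand.items(), key=lambda kv: kv[1])    # f_i(x_(1)) <= ... <= f_i(x_(m))
    total, previous = 0.0, 0.0
    for j, (_, value) in enumerate(items):
        upper_set = {x for x, _ in items[j:]}                  # A_(j) = {x_(j), ..., x_(m)}
        total += (value - previous) * measure(upper_set)
        previous = value
    return total

# toy usage with an additive measure, for which the Choquet integral reduces to the
# weighted sum of Theorem 9 below
d = {"x1": 0.2770, "x2": 0.3813, "x3": 0.3417}                 # the M-density of eq. (60)
additive = lambda subset: sum(d[x] for x in subset)
f_t = {"x1": 0.52, "x2": 0.61, "x3": 0.47}
print(choquet_integral(f_t, additive))                         # equals sum_j d(x_j) * f_t(x_j)
```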
Theorem 9. If a λ-measure is a standard fuzzy measure on $X = \{x_1, x_2, \ldots, x_m\}$ and $d: X \rightarrow [0,1]$ is its fuzzy density function, then the Choquet integral of $f_i: X \rightarrow R^+$ with respect to λ for individual $i$ satisfies

$\int_C f_i \, d\lambda = \sum_{j=1}^{m} d(x_j) f_i(x_j), \quad i = 1,2,\ldots,N$   (54)
3.15 Choquet Integral Composition Forecasting Model
Definition 21. Choquet Integral Composition Forecasting Model [8]
(i) Let $y_t$ be the realized value of the target variable at time $t$.
(ii) Let $X = \{x_1, x_2, \ldots, x_m\}$ be the set of $m$ competing predictors.
(iii) Let $f_t: X \rightarrow R^+$, where $f_t(x_1), f_t(x_2), \ldots, f_t(x_m)$ are the $m$ forecasted values of $y_t$ by the competing predictors $x_1, x_2, \ldots, x_m$ at time $t$.
If $\mu$ is a fuzzy measure on $X$ and $\alpha, \beta \in R$ satisfy

$\left(\hat{\alpha}, \hat{\beta}\right) = \arg\min_{\alpha,\beta}\sum_{t=1}^{N}\left(y_t - \alpha - \beta\int_C f_t \, dg_\mu\right)^2$   (55)

$\hat{\beta} = \frac{S_{yf}}{S_{ff}}$   (56)

$\hat{\alpha} = \frac{1}{N}\sum_{t=1}^{N} y_t - \hat{\beta}\,\frac{1}{N}\sum_{t=1}^{N}\int_C f_t \, dg_\mu$   (57)

$S_{yf} = \frac{1}{N-1}\sum_{t=1}^{N}\left(y_t - \frac{1}{N}\sum_{t=1}^{N} y_t\right)\left(\int_C f_t \, dg_\mu - \frac{1}{N}\sum_{t=1}^{N}\int_C f_t \, dg_\mu\right)$   (58)

where $S_{ff}$ is defined analogously as the sample variance of the Choquet integrals $\int_C f_t \, dg_\mu$, then

$\hat{y}_t = \hat{\alpha} + \hat{\beta}\int_C f_t \, dg_\mu, \quad t = 1,2,\ldots,N$

is called the Choquet integral regression composition forecasting estimator of $y_t$, and this model is also called the Choquet integral regression composition forecasting model with respect to the $\mu$-measure.
Theorem 10. If a λ-measure is a standard fuzzy measure, then the Choquet integral regression composition forecasting model with respect to the λ-measure is just a linear combination forecasting model.
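Finally, the regression step of Definition 21 reduces to a simple least-squares fit of the realized values on the Choquet integrals of the competing forecasts. The sketch below (hypothetical name fit_choquet_regression and toy numbers) assumes those integrals have already been computed, e.g. with the choquet_integral sketch above:

```python
# Minimal sketch of eqs. (55)-(58): alpha and beta of the Choquet integral regression
# composition forecasting model. The common normalization factor in S_yf and S_ff
# cancels in the ratio, so plain means are used. Illustrative only.
import numpy as np

def fit_choquet_regression(choquet_values, realized):
    """choquet_values: Choquet integrals of the forecasts; realized: target values y_t."""
    c = np.asarray(choquet_values, dtype=float)
    y = np.asarray(realized, dtype=float)
    s_ff = np.mean((c - c.mean()) ** 2)                     # S_ff
    s_yf = np.mean((y - y.mean()) * (c - c.mean()))         # S_yf, eq. (58)
    beta = s_yf / s_ff                                      # eq. (56)
    alpha = y.mean() - beta * c.mean()                      # eq. (57)
    return alpha, beta

# toy usage
c_train = [0.52, 0.48, 0.55, 0.60]          # Choquet integrals on the training window
y_train = [530.0, 495.0, 560.0, 610.0]      # toy realized target values
alpha, beta = fit_choquet_regression(c_train, y_train)
print(alpha, beta, alpha + beta * 0.57)     # forecast for a new Choquet integral value
```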
4 Experiments and Results
Real grain production data for Jilin from 1952 to 2007, together with the three kinds of forecasted values produced by the time series model, the exponential smoothing model and the GM(1,1) forecasting model, respectively, were taken from the paper of Zhang, Wang and Gao [2] and are listed in Table 2 in the Appendix. To evaluate the proposed new-density-based composition forecasting model, an experiment on these data using the sequential mean square error was conducted.

The grain production of the first 50 years and the corresponding three kinds of forecasted values were used as the training set, and the remaining data as the forecasting set. The following N-density and M-density were used for all fuzzy measures:

N-density: {0.3331, 0.3343, 0.3326}   (59)
M-density: {0.2770, 0.3813, 0.3417}   (60)

The performances of the Choquet integral composition forecasting models with respect to the completed extensional L-measure, the extensional L-measure, the L-measure, the λ-measure and the P-measure, respectively, together with a ridge regression composition forecasting model, a multiple linear regression composition forecasting model and the traditional linear weighted composition forecasting model, were compared. The results are listed in Table 1.
Table 1 SMSEs of 2 densities for 7 composition forecasting models

  Composition forecasting model                SMSE, N-density   SMSE, M-density
  Choquet integral regression, LCE-measure     13149.64          13217.31
  Choquet integral regression, LE-measure      13939.84          13398.29
  Choquet integral regression, L-measure       14147.83          13751.60
  Choquet integral regression, λ-measure       21576.38          19831.86
  Choquet integral regression, P-measure       16734.88          16465.98
  Ridge regression                             18041.92
  Multiple linear regression                   24438.29
Table 1 shows that the M-density based Choquet integral composition forecasting model with respect to the LCE-measure outperforms the other composition forecasting models. Furthermore, for each fuzzy measure, including the LCE-measure, LE-measure, L-measure, λ-measure and P-measure, the M-density based Choquet integral composition forecasting model is better than the N-density based one.
5 Conclusion
In this paper, a new density, the M-density, was proposed. Based on the M-density, a novel composition forecasting model was also proposed. To compare the forecasting efficiency of this new density with the well-known N-density, a real-data experiment was conducted. The performances of the Choquet integral composition forecasting models with the completed extensional L-measure, the extensional L-measure, the L-measure, the λ-measure and the P-measure, using the M-density and the N-density respectively, together with a ridge regression composition forecasting model, a multiple linear regression composition forecasting model and the traditional linear weighted composition forecasting model, were compared. The experimental results showed that, for each fuzzy measure, including the LCE-measure, LE-measure, L-measure, λ-measure and P-measure, the M-density based Choquet integral composition forecasting model is better than the N-density based one, and that the M-density based Choquet integral composition forecasting model with respect to the completed extensional L-measure outperforms all of the other composition forecasting models.
Acknowledgment. This study was partially supported by a grant from the National Science Council of Taiwan (NSC 100-2511-S-468-001).
References
1. Bates, J.M., Granger, C.W.J.: The Combination of Forecasts. Operational Research
Quarterly 20(4), 451–468 (1969)
2. Zhang, H.-Q., Wang, B., Gao, L.-B.: Application of Composition Forecasting Model in
the Agricultural Economy Research. Journal of Anhui Agri. Sci. 36(22), 9779–9782
(2008)
3. Hsu, C.-C., Chen, C.-Y.: Applications of improved grey prediction model for power
demand forecasting. Energy Conversion and Management 44, 2241–2249 (2003)
4. Kayacan, E., Ulutas, B., Kaynak, O.: Grey system theory-based models in time series
prediction. Expert Systems with Applications 37, 1784–1789 (2010)
5. Hoerl, A.E., Kennard, R.W., Baldwin, K.F.: Ridge regression: some simulations.
Communications in Statistics 4(2), 105–123 (1975)
6. Liu, H.-C., Tu, Y.-C., Lin, W.-C., Chen, C.C.: Choquet integral regression model
based on L-Measure and γ-Support. In: Proceedings of 2008 International Conference
on Wavelet Analysis and Pattern Recognition (2008)
7. Liu, H.-C.: Extensional L-Measure Based on any Given Fuzzy Measure and its
Application. In: Proceedings of 2009 CACS International Automatic Control
Conference, November 27-29, pp. 224–229. National Taipei University of
Technology, Taipei, Taiwan (2009)
8. Liu, H.-C.: A theoretical approach to the completed L-fuzzy measure. In: Proceedings
of 2009 International Institute of Applied Statistics Studies (IIASS), 2nd Conference,
Qingdao, China, July 24-29 (2009)
9. Liu, H.-C., Ou, S.-L., Cheng, Y.-T., Ou, Y.-C., Yu, Y.-K.: A Novel Composition
Forecasting Model Based on Choquet Integral with Respect to Extensional L-Measure.
In: Proceedings of the 19th National Conference on Fuzzy Theory and Its Applications
(2011)
10. Liu, H.-C., Ou, S.-L., Tsai, H.-C., Ou, Y.-C., Yu, Y.-K.: A Novel Choquet Integral
Composition Forecasting Model Based on M-Density. In: Pan, J.-S., Chen, S.-M.,
Nguyen, N.T. (eds.) ACIIDS 2012, Part I. LNCS, vol. 7196, pp. 167–176. Springer,
Heidelberg (2012)
11. Choquet, G.: Theory of capacities. Annales de l’Institut Fourier 5, 131–295 (1953)
12. Wang, Z., Klir, G.J.: Fuzzy Measure Theory. Plenum Press, New York (1992)
13. Sugeno, M.: Theory of fuzzy integrals and its applications. Unpublished doctoral
dissertation, Tokyo Institute of Technology, Tokyo, Japan (1974)
14. Zadeh, L.A.: Fuzzy Sets as a Basis for Theory of Possibility. Fuzzy Sets and
Systems 1, 3–28 (1978)
Appendix
Table 2 Grain production in Jilin from 1952 to 2007 and the corresponding fitted values of the three forecasting models and the composition forecasting model
Years Y X1 X2 X3 X4
1952 613.20 490.67 518.60 399.51 472.45
1953 561.45 549.73 570.84 414.09 511.35
1954 530.95 542.83 586.41 429.20 524.94
1955 556.53 549.57 584.31 444.86 530.10
1956 493.64 582.69 591.12 461.09 542.51
1957 429.35 598.64 570.80 477.91 538.81
1958 528.84 610.69 531.14 495.35 524.37
1959 526.60 633.88 540.11 513.43 537.85
1960 394.70 655.07 544.78 532.16 549.04
1961 398.55 672.97 497.45 551.58 531.58
1962 437.16 694.53 465.26 571.71 523.02
1963 501.67 617.26 457.04 592.57 519.94
1964 491.80 738.99 475.53 614.19 547.93
1965 525.10 761.94 484.57 636.61 563.02
1966 597.60 786.18 503.23 659.84 583.82
1967 647.74 810.67 543.38 683.91 616.78
1968 622.15 835.87 589.95 708.87 653.65
1969 498.70 862.13 612.17 734.74 677.54
1970 738.80 889.12 580.22 761.55 672.02
1971 713.05 916.86 647.93 789.33 721.78
Table 2 (continued)
2003 2259.60 2456.54 2192.19 2485.10 2321.52
2004 2510.00 2533.38 2254.69 2575.80 2395.57
2005 2581.21 2612.61 2390.11 2669.80 2511.18
2006 2720.00 2694.33 2508.40 2767.20 2618.82
2007 2454.00 2778.60 2560.38 2831.50 2677.95
Y: realized value of the target variable
X1: fitted value of the time series model
X2: fitted value of the exponential smoothing model
X3: fitted value of the GM(1,1) model
X4: fitted value of the composition forecasting model