This document presents a fuzzy inventory model to determine the optimal time for an employee to change jobs while minimizing costs. It introduces concepts like real wage, membership functions, and fuzzy nonlinear programming. The model considers costs of decreasing real income, moving to a new job, and income shortages. It uses Lagrange multipliers to solve the fuzzy nonlinear programming problem and compares the results to a crisp model. A numerical example is provided to illustrate the application to manpower planning.
11. Fuzzy inventory model with shortages in manpower planning (Alexander Decker)
This document presents a fuzzy inventory model to determine the optimal time for an employee to quit their current job based on factors like the declining real wage over time and costs associated with changing jobs. The model extends an existing economic order quantity (EOQ) model to account for uncertainty in costs using fuzzy set theory. Membership functions are defined to represent the fuzziness of parameters like real income, costs, and constraints. A fuzzy nonlinear programming problem is formulated and solved using Lagrange multipliers to obtain the optimal solution under fuzzy conditions. The results are compared to the classical crisp model and sensitivity analysis is performed.
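Since the model extends a classical EOQ model with shortages, the crisp optimum it is compared against can be sketched numerically. This is a minimal sketch with generic parameter names D, K, h, p; the paper's fuzzified costs and membership functions are not reproduced here.

```python
from math import sqrt

def eoq_with_shortages(D, K, h, p):
    """Crisp EOQ with planned shortages (backorders).

    D: demand rate, K: setup cost, h: holding cost, p: shortage cost.
    Returns (Q, S): optimal order quantity and maximum shortage level,
    from the standard closed form Q* = sqrt(2KD(h+p)/(hp)), S* = hQ*/(h+p).
    """
    Q = sqrt(2 * K * D * (h + p) / (h * p))
    S = h * Q / (h + p)
    return Q, S

def total_cost(D, K, h, p, Q, S):
    """Average cost per unit time for lot size Q and shortage level S."""
    return K * D / Q + h * (Q - S) ** 2 / (2 * Q) + p * S ** 2 / (2 * Q)
```

The fuzzy model replaces the fixed costs above with fuzzy numbers and solves the resulting nonlinear program via Lagrange multipliers; the crisp formulas serve as the comparison baseline.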
Error Estimates for Multi-Penalty Regularization under General Source Condition (csandit)
In learning theory, the convergence of the regression problem is investigated for least-squares Tikhonov regularization schemes in both the RKHS norm and the L2 norm. We consider the multi-penalized least-squares regularization scheme under a general source condition with polynomial decay of the eigenvalues of the integral operator. One motivation for this work is to discuss convergence for the widely considered manifold regularization scheme. The optimal convergence rate of the multi-penalty regularizer is achieved in the interpolation norm using the concept of effective dimension. Further, we propose a penalty balancing principle based on augmented Tikhonov regularization for the choice of regularization parameters. The superiority of multi-penalty regularization over single-penalty regularization is shown on an academic example and the moon data set.
Application of Graphic LASSO in Portfolio Optimization_Yixuan Chen & Mengxi J... (Mengxi Jiang)
- The document describes using graphical lasso to estimate the precision matrix of stock returns and apply it to portfolio optimization.
- Graphical lasso estimates the precision matrix instead of the covariance matrix to allow for sparsity. This makes the estimation more efficient for large datasets.
- The study uses 8 different models to simulate stock return data and compares the performance of graphical lasso, sample covariance, and shrinkage estimators on portfolio optimization of in-sample and out-of-sample test data. Graphical lasso performed best on out-of-sample test data, showing it can generate portfolios that generalize well.
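The pipeline described in the bullets can be sketched as follows, assuming scikit-learn's GraphicalLasso estimator and synthetic standardized returns rather than the authors' stock data and models:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# Hypothetical standardized daily returns for 5 assets.
X = rng.normal(size=(500, 5))

# Sparse precision-matrix estimate via graphical lasso.
gl = GraphicalLasso(alpha=0.1).fit(X)
Theta = gl.precision_

# Global minimum-variance portfolio from the precision matrix:
# w proportional to Theta @ 1, normalized so the weights sum to 1.
ones = np.ones(Theta.shape[0])
w = Theta @ ones / (ones @ Theta @ ones)
```

Working with the precision matrix directly avoids inverting a possibly ill-conditioned sample covariance, which is the efficiency point made above.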
The document provides an overview of convex optimization problems, including linear programming (LP), quadratic programming (QP), quadratically constrained quadratic programming (QCQP), second-order cone programming (SOCP), and geometric programming. It discusses how these problems can be transformed into equivalent convex optimization problems to help solve them. Local optima are guaranteed to be global optima for convex problems. Optimality criteria are presented for problems with differentiable objectives.
Dynamic programming is used to solve optimization problems by breaking them down into overlapping subproblems. It is applicable to problems that exhibit optimal substructure and overlapping subproblems. The matrix chain multiplication problem can be solved using dynamic programming in O(n^3) time by defining the problem recursively, computing the costs of subproblems in a bottom-up manner using dynamic programming, and tracing the optimal solution back from the computed information. Similarly, the longest common subsequence problem exhibits optimal substructure and can be solved using dynamic programming.
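The bottom-up matrix chain recurrence described above can be sketched in a few lines (generic Python, not tied to any particular text's pseudocode):

```python
def matrix_chain_order(dims):
    """Minimum scalar multiplications to multiply a chain of matrices,
    where matrix i has dimensions dims[i] x dims[i+1].

    Bottom-up dynamic programming over chain lengths: O(n^3) time."""
    n = len(dims) - 1
    # m[i][j] = min cost of multiplying matrices i..j (0-indexed).
    m = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            m[i][j] = min(
                m[i][k] + m[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)
            )
    return m[0][n - 1]
```

For dimensions [10, 30, 5, 60], multiplying (A1·A2) first costs 10·30·5 + 10·5·60 = 4500, versus 27000 the other way, and the table recovers the cheaper order.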
Generalization of Tensor Factorization and Applications (Kohei Hayashi)
This document presents two tensor factorization methods: Exponential Family Tensor Factorization (ETF) and Full-Rank Tensor Completion (FTC). ETF generalizes Tucker decomposition by allowing for different noise distributions in the tensor and handles mixed discrete and continuous values. FTC completes missing tensor values without reducing dimensionality by kernelizing Tucker decomposition. The document outlines these methods and their motivations, discusses Tucker decomposition, and provides an example applying ETF to anomaly detection in time series sensor data.
In this paper, we propose four algorithms for L1-norm computation of regression parameters, two of which are more efficient for simple and multiple regression models. We start with restricted simple linear regression and the corresponding derivation and computation of the weighted median problem, for which a computing function is coded. After discussing the m-parameter model, we extend the algorithm to unrestricted simple linear regression, and two algorithms (a crude one and an efficient one) are proposed. The procedures are then generalized to the m-parameter model through two new algorithms, of which Algorithm 4 is the more efficient. Various properties of these algorithms are discussed.
Gentle Introduction to Dirichlet Processes (Yap Wooi Hen)
This document provides an introduction to Dirichlet processes. It begins by motivating the need for nonparametric clustering when the number of clusters in the data is unknown. It then provides an overview of Dirichlet processes and discusses them from multiple perspectives, including samples from a Dirichlet process, the Chinese restaurant process representation, stick breaking construction, and formal definition. It also covers Dirichlet process mixtures and common inference techniques like Markov chain Monte Carlo and variational inference.
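The stick-breaking construction mentioned above can be sketched in a few lines, truncated to a finite number of weights (a standard approximation for sampling):

```python
import numpy as np

def stick_breaking(alpha, num_weights, rng):
    """Truncated stick-breaking construction of DP mixture weights:
    beta_k ~ Beta(1, alpha), pi_k = beta_k * prod_{j<k} (1 - beta_j)."""
    betas = rng.beta(1.0, alpha, size=num_weights)
    # Length of stick remaining before each break.
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining

rng = np.random.default_rng(42)
pi = stick_breaking(alpha=2.0, num_weights=100, rng=rng)
```

Smaller concentration alpha puts more mass on the first few sticks, i.e. fewer effective clusters, which is the intuition behind nonparametric clustering with an unknown number of components.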
This document introduces tensors through examples. It defines a vector as a rank 1 tensor and a matrix as a rank 2 tensor. It then provides an example of a rank 3 tensor. The document discusses how to define an inner product between tensors and provides examples using vectors and matrices. It also gives an example of how derivatives of a function can produce tensors of different ranks. Finally, it introduces the concept of decomposing matrices into their symmetric and antisymmetric parts.
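The inner products and the symmetric/antisymmetric decomposition described here are easy to sketch with NumPy (illustrative arrays, not the document's own examples):

```python
import numpy as np

# Inner product of two rank-2 tensors (matrices): contract both indices,
# i.e. the sum of elementwise products, equivalently trace(A^T B).
A = np.arange(6.0).reshape(2, 3)
B = np.ones((2, 3))
inner = np.tensordot(A, B, axes=2)

# Decomposition of a square matrix into symmetric and antisymmetric parts:
# M = (M + M^T)/2 + (M - M^T)/2.
M = np.array([[1.0, 2.0], [4.0, 3.0]])
sym = (M + M.T) / 2
antisym = (M - M.T) / 2
```

The same `tensordot` call generalizes to higher ranks by contracting more axes, which is how the rank-3 example in the document would be handled.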
This document summarizes a talk given by Yoshihiro Mizoguchi on developing a Coq library for relational calculus. The talk introduces relational calculus and its applications. It describes implementing definitions and proofs about relations, Boolean algebras, relation algebras, and Dedekind categories in Coq. The library provides a formalization of basic notions in relational theory and can be used to formally verify properties of relations and prove theorems automatically.
This document discusses various importance sampling methods for approximating Bayes factors, which are used for Bayesian model selection. It compares regular importance sampling, bridge sampling, harmonic means, mixtures to bridge sampling, and Chib's solution. An example application to probit modeling of diabetes in Pima Indian women is presented to illustrate regular importance sampling. Markov chain Monte Carlo methods like the Metropolis-Hastings algorithm and Gibbs sampling can be used to sample from the probit models.
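Regular importance sampling, the baseline method compared above, can be sketched on a toy normalizing-constant problem (a hypothetical target, not the probit example from the document):

```python
import numpy as np

rng = np.random.default_rng(0)

# Importance-sampling estimate of Z = integral of exp(-x^4) dx,
# using a standard normal proposal q(x); the estimator averages
# the weights f(x)/q(x) over draws x ~ q.
n = 200_000
x = rng.standard_normal(n)
q = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
w = np.exp(-x ** 4) / q
Z_hat = w.mean()
```

The weights here are bounded, so the estimator has low variance; harmonic-mean and bridge-sampling variants trade this requirement against being able to reuse posterior draws.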
This document discusses Bayesian model comparison in cosmology using population Monte Carlo methods. It provides background on key questions in cosmology that can be addressed using cosmic microwave background data from experiments like WMAP and Planck. Population Monte Carlo and adaptive importance sampling methods are introduced to help approximate Bayesian evidence for different cosmological models given the immense computational challenges of working with this cosmological data.
Determination of Optimal Product Mix for Profit Maximization using Linear Pro... (IJERA Editor)
This paper demonstrates the use of linear programming methods to determine the optimal product mix for profit maximization. Several papers have been written demonstrating the use of linear programming to find the optimal product mix in various organizations. This paper aims to show the generic approach for finding the optimal product mix.
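A generic product-mix LP of the kind the paper describes can be sketched with SciPy. The profits and machine-hour constraints below are hypothetical, not the paper's data:

```python
from scipy.optimize import linprog

# Two products with unit profits 3 and 5; maximize 3*x1 + 5*x2
# subject to capacity limits on three machines.  linprog minimizes,
# so the objective is negated.
c = [-3, -5]
A_ub = [[1, 0],   # machine 1: x1 <= 4
        [0, 2],   # machine 2: 2*x2 <= 12
        [3, 2]]   # machine 3: 3*x1 + 2*x2 <= 18
b_ub = [4, 12, 18]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
# Optimal mix: x1 = 2, x2 = 6, profit 36.
```

Extending to a real product mix is a matter of filling in one row per resource constraint and one column per product.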
This document discusses category theory and algebraic semantics for programming languages. It defines categories, functors, natural transformations, Ω-algebras, (Ω,E)-algebras, and monads. Ω-algebras allow modeling algebraic structures like groups. (Ω,E)-algebras satisfy equations E and have a free algebra. Monads correspond to algebraic theories and define Kleisli categories of algebras. Algebraic semantics uses axioms and rules to prove equalities in (Ω,E)-algebras.
Tensor Decomposition and its Applications (Keisuke OTAKI)
This document discusses tensor factorizations and decompositions and their applications in data mining. It introduces tensors as multi-dimensional arrays and covers 2nd order tensors (matrices) and 3rd order tensors. It describes how tensor decompositions like the Tucker model and CANDECOMP/PARAFAC (CP) model can be used to decompose tensors into core elements to interpret data. It also discusses singular value decomposition (SVD) as a way to decompose matrices and reduce dimensions while approximating the original matrix.
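The SVD-based low-rank approximation mentioned at the end can be sketched directly in NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(6, 4))

# Truncated SVD: keeping the r largest singular values gives the best
# rank-r approximation in the Frobenius norm (Eckart-Young theorem).
U, s, Vt = np.linalg.svd(M, full_matrices=False)
r = 2
M_r = U[:, :r] * s[:r] @ Vt[:r, :]
```

The approximation error is exactly the norm of the discarded singular values, which is what makes SVD the standard tool for dimension reduction before moving to the Tucker and CP generalizations for higher-order tensors.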
This document summarizes Hill's method for numerically approximating the eigenvalues and eigenfunctions of differential operators. Hill's method has two main steps:
1. Perform a Floquet-Bloch decomposition to reduce the problem from the real line to the interval [0,L] with periodic boundary conditions, parameterized by the Floquet exponent μ. This gives an operator with a compact resolvent.
2. Approximate the solutions by Fourier series, reducing the problem to a matrix eigenvalue problem that can be solved numerically.
The method is straightforward to implement and effective for various problems involving differential operators on the real line or with periodic boundary conditions. Convergence rates and error bounds for Hill's method are also presented.
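The two steps above can be sketched for the model operator -u'' + q(x)u on [0, 2π]. `hill_eigs` below is a hypothetical helper illustrating the truncated Fourier matrix, not the authors' implementation:

```python
import numpy as np

def hill_eigs(q_coeffs, mu, N):
    """Hill's method sketch for -u'' + q(x) u on [0, 2*pi], periodic b.c.

    q is given by its Fourier coefficients {m: q_m}.  A Bloch solution
    u = e^{i mu x} w(x) with w = sum_k c_k e^{i k x}, |k| <= N, turns the
    eigenvalue problem into a matrix problem: the diagonal holds (k + mu)^2
    and multiplication by q couples mode k to mode k - m with weight q_m.
    Returns the sorted real parts of the matrix eigenvalues."""
    ks = np.arange(-N, N + 1)
    A = np.diag((ks + mu) ** 2).astype(complex)
    for m, qm in q_coeffs.items():
        for i in range(len(ks)):
            j = i - m  # index of mode ks[i] - m
            if 0 <= j < len(ks):
                A[i, j] += qm
    return np.sort(np.linalg.eigvals(A).real)
```

Because the Fourier coefficients of smooth periodic eigenfunctions decay rapidly, the low eigenvalues converge quickly as the truncation N grows, which is the spectral-accuracy point made above.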
We propose a regularized method for multivariate linear regression when the number of predictors may exceed the sample size. This method is designed to strengthen the estimation and the selection of the relevant input features with three ingredients: it takes advantage of the dependency pattern between the responses by estimating the residual covariance; it performs selection on direct links between predictors and responses; and selection is driven by prior structural information. To this end, we build on a recent reformulation of the multivariate linear regression model to a conditional Gaussian graphical model and propose a new regularization scheme accompanied with an efficient optimization procedure. On top of showing very competitive performance on artificial and real data sets, our method demonstrates capabilities for fine interpretation of its parameters, as illustrated in applications to genetics, genomics and spectroscopy.
The document contains 7 questions related to probability and statistics. Question 1 asks about computing a 95% confidence interval for a population mean and reducing the error in estimating the population mean. Question 2 asks about the number of amplifiers needed to achieve 95% reliability for a concert lasting 2 hours. Question 3 asks about the probability of a system meeting certain tolerance limits on diameters and the probability of none among randomly selected systems violating tolerances.
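The large-sample 95% confidence interval asked for in Question 1 follows the standard recipe; a minimal sketch, assuming the normal critical value 1.96 applies:

```python
from math import sqrt
from statistics import mean, stdev

def ci95(sample):
    """Large-sample 95% confidence interval for the population mean:
    mean +/- 1.96 * s / sqrt(n)."""
    n = len(sample)
    m, s = mean(sample), stdev(sample)
    half = 1.96 * s / sqrt(n)
    return m - half, m + half
```

Since the margin of error is proportional to 1/sqrt(n), halving the error requires quadrupling the sample size, which is the usual answer to the second part of such a question.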
This document provides an overview of brute force and divide-and-conquer algorithms. It discusses various brute force algorithms, such as computing a^n, string matching, the closest-pair problem, convex hull problems, and exhaustive-search algorithms like the traveling salesman problem and the knapsack problem, and analyzes their time efficiency. The document then discusses the divide-and-conquer approach with examples like merge sort, quicksort, and matrix multiplication, providing pseudocode and analysis for mergesort. In summary, the document covers brute force and divide-and-conquer techniques for solving algorithmic problems.
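Mergesort, the divide-and-conquer example the document analyzes, can be sketched as:

```python
def merge_sort(a):
    """Divide and conquer: split in half, sort each half recursively,
    then merge the two sorted halves in linear time (O(n log n) overall)."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]
```

The recurrence T(n) = 2T(n/2) + O(n) from the merge step gives the O(n log n) bound the analysis derives.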
This document is the final exam for ENGR 371 - Probability and Statistics given on April 29, 2010 at Concordia University. It contains 6 questions testing concepts like probability, confidence intervals, hypothesis testing, and distributions. Formulas relevant to the exam questions are also provided.
This document provides a summary and critique of the local volatility model. It begins by briefly recalling the construction of the local volatility concept, which aims to find an implied volatility function that matches option prices. However, the document argues that the local volatility model makes an error by replacing the real stock process with an auxiliary process, as they are defined on different coordinate spaces. While the goal of eliminating discrepancies between real and implied volatility is reasonable, the implementation of the local volatility concept ignores important initial conditions. Overall, the document presents both the theoretical basis of local volatility and a critical viewpoint of its mathematical derivation.
Principal component analysis and matrix factorizations for learning (part 3) ... (zukun)
Nonnegative matrix factorization (NMF) decomposes a data matrix into two nonnegative matrices and can be shown to be equivalent to k-means clustering and spectral clustering. NMF provides parts-based representations of data and soft clustering. Experiments on text datasets demonstrated NMF achieved better clustering accuracy than k-means.
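A minimal sketch of NMF via the Lee-Seung multiplicative updates, a common formulation that may differ from the one used in the slides:

```python
import numpy as np

def nmf(V, rank, iters=200, seed=0):
    """Factor a nonnegative matrix V (n x m) as W @ H with W, H >= 0,
    minimizing ||V - W H||_F via Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    eps = 1e-9  # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because the updates are multiplicative, nonnegativity is preserved automatically, and each row of H can be read as a soft cluster assignment, which is the clustering connection described above.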
The Probability that a Matrix of Integers Is Diagonalizable (Jay Liew)
The Probability that a Matrix of Integers Is Diagonalizable
Andrew J. Hetzel, Jay S. Liew, and Kent E. Morrison
1. INTRODUCTION. It is natural to use integer matrices for examples and exercises when teaching a linear algebra course, or, for that matter, when writing a textbook in the subject. After all, integer matrices offer a great deal of algebraic simplicity for particular problems. This, in turn, lets students focus on the concepts. Of course, to insist on integer matrices exclusively would certainly give the wrong idea about many important concepts. For example, integer matrices with integer matrix inverses are quite rare, although invertible integer matrices (over the rational numbers) are relatively common. In this article, we focus on the property of diagonalizability for integer matrices and pose the question of the likelihood that an integer matrix is diagonalizable. Specifically, we ask: What is the probability that an n × n matrix with integer entries is diagonalizable over the complex numbers, the real numbers, and the rational numbers, respectively?
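The question lends itself to a quick Monte Carlo experiment. The sketch below counts matrices with all-distinct eigenvalues, which is only a lower bound on diagonalizability over C, since a matrix with repeated eigenvalues can still be diagonalizable:

```python
import numpy as np

rng = np.random.default_rng(0)

def frac_distinct_eigs(n, lo, hi, trials):
    """Fraction of random n x n integer matrices (entries uniform in
    [lo, hi]) whose eigenvalues are pairwise distinct.  Distinct
    eigenvalues imply diagonalizability over the complex numbers."""
    hits = 0
    for _ in range(trials):
        A = rng.integers(lo, hi + 1, size=(n, n))
        ev = np.sort_complex(np.linalg.eigvals(A))
        if np.min(np.abs(np.diff(ev))) > 1e-8:
            hits += 1
    return hits / trials
```

For 2 × 2 matrices the eigenvalue gap is sqrt of the integer discriminant, so distinct eigenvalues are at least 1 apart and the numerical threshold is unproblematic at this size.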
Time series analysis, modeling and applications (Springer)
- The document proposes a novel composition forecasting model based on the Choquet integral with respect to a completed extensional L-measure and M-density fuzzy measure.
- It compares the performance of this model to other composition forecasting models, including ones based on extensional L-measure, L-measure, lambda-measure, and P-measure, as well as ridge regression and multiple linear regression models.
- Experimental results on grain production time series data show that the proposed Choquet integral composition forecasting model with completed extensional L-measure and M-density outperforms the other models.
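The discrete Choquet integral underlying these composition models can be sketched for a generic fuzzy measure; the specific L-measure and M-density constructions are not reproduced here:

```python
def choquet(values, measure):
    """Discrete Choquet integral of values {i: x_i} with respect to a
    set function measure(frozenset) satisfying measure(empty) = 0.

    Sort the x_i ascending and accumulate
    sum_i (x_(i) - x_(i-1)) * mu(A_(i)),  A_(i) = {j : x_j >= x_(i)}."""
    items = sorted(values.items(), key=lambda kv: kv[1])
    total, prev = 0.0, 0.0
    remaining = set(values)
    for key, x in items:
        total += (x - prev) * measure(frozenset(remaining))
        prev = x
        remaining.discard(key)
    return total
```

When the measure is additive the Choquet integral collapses to an ordinary weighted sum; the non-additive measures compared above are precisely what lets the composition model capture interaction between component forecasts.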
This document discusses using the Wasserstein distance for inference in generative models. It begins by introducing ABC methods that use a distance between samples to compare observed and simulated data. It then discusses using the Wasserstein distance as an alternative distance metric that has lower variance than the Euclidean distance. The document covers computational aspects of calculating the Wasserstein distance, asymptotic properties of minimum Wasserstein estimators, and applications to time series data.
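For equal-size one-dimensional samples, the 1-Wasserstein distance reduces to a sorted-sample comparison, which makes the computational point concrete:

```python
import numpy as np

def wasserstein_1d(xs, ys):
    """1-Wasserstein distance between two equal-size empirical
    distributions on the real line: the optimal coupling matches
    order statistics, so it is the mean absolute difference of the
    sorted samples."""
    return np.mean(np.abs(np.sort(xs) - np.sort(ys)))
```

In higher dimensions no such closed form exists and the distance requires solving an optimal transport problem, which is where the computational aspects discussed in the document come in.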
Numerical smoothing and hierarchical approximations for efficient option pric... (Chiheb Ben Hammouda)
1. The document presents a numerical smoothing technique to improve the efficiency of option pricing and density estimation when analytic smoothing is not possible.
2. The technique involves numerically determining discontinuities in the integrand and computing the integral only over the smooth regions. It also uses hierarchical representations and Brownian bridges to reduce the effective dimension of the problem.
3. The numerical smoothing approach outperforms Monte Carlo methods for high dimensional cases and improves the complexity of multilevel Monte Carlo from O(TOL^-2.5) to O(TOL^-2 log(TOL)^2).
The document discusses human resource planning and management, including topics such as manpower utilization, forecasting techniques, scheduling methods, factors that affect job performance, and human resource accounting. It provides details on how to improve manpower utilization, different types of forecasts and scheduling tools, environmental and individual factors that impact performance, and methods for assigning economic value to human resources through accounting. The goal of human resource planning and accounting is to optimize the utilization of employee skills and talents and effectively manage human capital as a valuable organizational asset.
The document discusses India's growing healthcare industry and shortage of healthcare professionals. It notes that India faces shortages of around 6 lakh doctors, 15 lakh nurses, and 2 lakh dental surgeons. Indian nurses are in high demand globally, especially in Europe which faces severe shortfalls. Experts estimate the overseas demand for Indian nurses to be between 30,000 to 50,000, with an immediate demand of 18,000 nurses in Britain alone. Effective health manpower planning is needed to match supply and demand of healthcare workers in India and address issues like migration that impacts domestic staffing.
This document introduces tensors through examples. It defines a vector as a rank 1 tensor and a matrix as a rank 2 tensor. It then provides an example of a rank 3 tensor. The document discusses how to define an inner product between tensors and provides examples using vectors and matrices. It also gives an example of how derivatives of a function can produce tensors of different ranks. Finally, it introduces the concept of decomposing matrices into their symmetric and antisymmetric parts.
This document summarizes a talk given by Yoshihiro Mizoguchi on developing a Coq library for relational calculus. The talk introduces relational calculus and its applications. It describes implementing definitions and proofs about relations, Boolean algebras, relation algebras, and Dedekind categories in Coq. The library provides a formalization of basic notions in relational theory and can be used to formally verify properties of relations and prove theorems automatedly.
This document discusses various importance sampling methods for approximating Bayes factors, which are used for Bayesian model selection. It compares regular importance sampling, bridge sampling, harmonic means, mixtures to bridge sampling, and Chib's solution. An example application to probit modeling of diabetes in Pima Indian women is presented to illustrate regular importance sampling. Markov chain Monte Carlo methods like the Metropolis-Hastings algorithm and Gibbs sampling can be used to sample from the probit models.
This document discusses Bayesian model comparison in cosmology using population Monte Carlo methods. It provides background on key questions in cosmology that can be addressed using cosmic microwave background data from experiments like WMAP and Planck. Population Monte Carlo and adaptive importance sampling methods are introduced to help approximate Bayesian evidence for different cosmological models given the immense computational challenges of working with this cosmological data.
Determination of Optimal Product Mix for Profit Maximization using Linear Pro...IJERA Editor
This paper demonstrates the use of liner programming methods in order to determine the optimal product mix for
profit maximization. There had been several papers written to demonstrate the use of linear programming in
finding the optimal product mix in various organization. This paper is aimed to show the generic approach to be
taken to find the optimal product mix.
This document discusses category theory and algebraic semantics for programming languages. It defines categories, functors, natural transformations, Ω-algebras, (Ω,E)-algebras, and monads. Ω-algebras allow modeling algebraic structures like groups. (Ω,E)-algebras satisfy equations E and have a free algebra. Monads correspond to algebraic theories and define Kleisli categories of algebras. Algebraic semantics uses axioms and rules to prove equalities in (Ω,E)-algebras.
Tensor Decomposition and its ApplicationsKeisuke OTAKI
This document discusses tensor factorizations and decompositions and their applications in data mining. It introduces tensors as multi-dimensional arrays and covers 2nd order tensors (matrices) and 3rd order tensors. It describes how tensor decompositions like the Tucker model and CANDECOMP/PARAFAC (CP) model can be used to decompose tensors into core elements to interpret data. It also discusses singular value decomposition (SVD) as a way to decompose matrices and reduce dimensions while approximating the original matrix.
This document summarizes Hill's method for numerically approximating the eigenvalues and eigenfunctions of differential operators. Hill's method has two main steps:
1. Perform a Floquet-Bloch decomposition to reduce the problem from the real line to the interval [0,L] with periodic boundary conditions, parameterized by the Floquet exponent μ. This gives an operator with a compact resolvent.
2. Approximate the solutions by Fourier series, reducing the problem to a matrix eigenvalue problem that can be solved numerically.
The method is straightforward to implement and effective for various problems involving differential operators on the real line or with periodic boundary conditions. Convergence rates and error bounds for Hill's method are also presented.
We propose a regularized method for multivariate linear regression when the number of predictors may exceed the sample size. This method is designed to strengthen the estimation and the selection of the relevant input features with three ingredients: it takes advantage of the dependency pattern between the responses by estimating the residual covariance; it performs selection on direct links between predictors and responses; and selection is driven by prior structural information. To this end, we build on a recent reformulation of the multivariate linear regression model to a conditional Gaussian graphical model and propose a new regularization scheme accompanied with an efficient optimization procedure. On top of showing very competitive performance on artificial and real data sets, our method demonstrates capabilities for fine interpretation of its parameters, as illustrated in applications to genetics, genomics and spectroscopy.
The document contains 7 questions related to probability and statistics. Question 1 asks about computing a 95% confidence interval for a population mean and reducing the error in estimating the population mean. Question 2 asks about the number of amplifiers needed to achieve 95% reliability for a concert lasting 2 hours. Question 3 asks about the probability of a system meeting certain tolerance limits on diameters and the probability of none among randomly selected systems violating tolerances.
This document provides an overview of brute force and divide-and-conquer algorithms. It discusses various brute force algorithms like computing an, string matching, closest pair problem, convex hull problems, and exhaustive search algorithms like the traveling salesman problem and knapsack problem. It also analyzes the time efficiency of these brute force algorithms. The document then discusses the divide-and-conquer approach and provides examples like merge sort, quicksort, and matrix multiplication. It provides pseudocode and analysis for mergesort. In summary, the document covers brute force and divide-and-conquer techniques for solving algorithmic problems.
This document is the final exam for ENGR 371 - Probability and Statistics given on April 29, 2010 at Concordia University. It contains 6 questions testing concepts like probability, confidence intervals, hypothesis testing, and distributions. Formulas relevant to the exam questions are also provided.
This document provides a summary and critique of the local volatility model. It begins by briefly recalling the construction of the local volatility concept, which aims to find an implied volatility function that matches option prices. However, the document argues that the local volatility model makes an error by replacing the real stock process with an auxiliary process, as they are defined on different coordinate spaces. While the goal of eliminating discrepancies between real and implied volatility is reasonable, the implementation of the local volatility concept ignores important initial conditions. Overall, the document presents both the theoretical basis of local volatility and a critical viewpoint of its mathematical derivation.
Principal component analysis and matrix factorizations for learning (part 3) ...zukun
Nonnegative matrix factorization (NMF) decomposes a data matrix into two nonnegative matrices and can be shown to be equivalent to k-means clustering and spectral clustering. NMF provides parts-based representations of data and soft clustering. Experiments on text datasets demonstrated NMF achieved better clustering accuracy than k-means.
The Probability that a Matrix of Integers Is DiagonalizableJay Liew
The Probability that a
Matrix of Integers Is Diagonalizable
Andrew J. Hetzel, Jay S. Liew, and Kent E. Morrison
1. INTRODUCTION. It is natural to use integer matrices for examples and exercises
when teaching a linear algebra course, or, for that matter, when writing a textbook in
the subject. After all, integer matrices offer a great deal of algebraic simplicity for particular
problems. This, in turn, lets students focus on the concepts. Of course, to insist
on integer matrices exclusively would certainly give the wrong idea about many important
concepts. For example, integer matrices with integer matrix inverses are quite
rare, although invertible integer matrices (over the rational numbers) are relatively
common. In this article, we focus on the property of diagonalizability for integer matrices
and pose the question of the likelihood that an integer matrix is diagonalizable.
Specifically, we ask: What is the probability that an n × n matrix with integer entries is
diagonalizable over the complex numbers, the real numbers, and the rational numbers,
respectively?
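The question invites a quick experiment. The Monte Carlo sketch below estimates the probability for the 2 × 2 case over the real numbers, using the standard discriminant test (a 2 × 2 real matrix is diagonalizable over R exactly when its characteristic discriminant is positive, or when it is a scalar matrix); for entries drawn uniformly from [-N, N], the estimate settles near 0.68 as N grows (the limiting value for this case works out to 49/72):

```python
import random

def diagonalizable_over_R(a, b, c, d):
    """The matrix [[a, b], [c, d]] is diagonalizable over R iff the
    discriminant (a-d)^2 + 4bc of its characteristic polynomial is
    positive, or the matrix is scalar (repeated eigenvalue, b = c = 0)."""
    disc = (a - d) ** 2 + 4 * b * c
    return disc > 0 or (disc == 0 and b == 0 and c == 0)

def estimate(N=50, trials=20000, seed=1):
    """Monte Carlo estimate of the probability that a random 2x2 integer
    matrix with entries uniform in [-N, N] is diagonalizable over R."""
    rng = random.Random(seed)
    hits = sum(
        diagonalizable_over_R(*(rng.randint(-N, N) for _ in range(4)))
        for _ in range(trials)
    )
    return hits / trials
```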
Time series analysis, modeling and applications (Springer)
- The document proposes a novel composition forecasting model based on the Choquet integral with respect to a completed extensional L-measure and M-density fuzzy measure.
- It compares the performance of this model to other composition forecasting models, including ones based on extensional L-measure, L-measure, lambda-measure, and P-measure, as well as ridge regression and multiple linear regression models.
- Experimental results on grain production time series data show that the proposed Choquet integral composition forecasting model with completed extensional L-measure and M-density outperforms the other models.
This document discusses using the Wasserstein distance for inference in generative models. It begins by introducing ABC methods that use a distance between samples to compare observed and simulated data. It then discusses using the Wasserstein distance as an alternative distance metric that has lower variance than the Euclidean distance. The document covers computational aspects of calculating the Wasserstein distance, asymptotic properties of minimum Wasserstein estimators, and applications to time series data.
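In one dimension the p-Wasserstein distance between two equal-size empirical samples has a closed form: sort both samples and match order statistics. A minimal sketch:

```python
def wasserstein_1d(xs, ys, p=1):
    """Empirical p-Wasserstein distance between two equal-size 1-D samples.
    In 1-D the optimal coupling pairs the i-th smallest of xs with the
    i-th smallest of ys, so sorting solves the transport problem."""
    assert len(xs) == len(ys), "sketch assumes equal sample sizes"
    xs, ys = sorted(xs), sorted(ys)
    return (sum(abs(x - y) ** p for x, y in zip(xs, ys)) / len(xs)) ** (1 / p)
```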
Numerical smoothing and hierarchical approximations for efficient option pric... (Chiheb Ben Hammouda)
1. The document presents a numerical smoothing technique to improve the efficiency of option pricing and density estimation when analytic smoothing is not possible.
2. The technique involves numerically determining discontinuities in the integrand and computing the integral only over the smooth regions. It also uses hierarchical representations and Brownian bridges to reduce the effective dimension of the problem.
3. The numerical smoothing approach outperforms Monte Carlo methods for high dimensional cases and improves the complexity of multilevel Monte Carlo from O(TOL^-2.5) to O(TOL^-2 log(TOL)^2).
The document discusses human resource planning and management, including topics such as manpower utilization, forecasting techniques, scheduling methods, factors that affect job performance, and human resource accounting. It provides details on how to improve manpower utilization, different types of forecasts and scheduling tools, environmental and individual factors that impact performance, and methods for assigning economic value to human resources through accounting. The goal of human resource planning and accounting is to optimize the utilization of employee skills and talents and effectively manage human capital as a valuable organizational asset.
The document discusses India's growing healthcare industry and shortage of healthcare professionals. It notes that India faces shortages of around 6 lakh doctors, 15 lakh nurses, and 2 lakh dental surgeons. Indian nurses are in high demand globally, especially in Europe which faces severe shortfalls. Experts estimate the overseas demand for Indian nurses to be between 30,000 to 50,000, with an immediate demand of 18,000 nurses in Britain alone. Effective health manpower planning is needed to match supply and demand of healthcare workers in India and address issues like migration that impacts domestic staffing.
Manpower planning is the process by which a firm ensures it has the right number and type of employees at the right place and time to perform tasks for which they are economically suitable. It involves forecasting future personnel needs, analyzing current staffing, and developing programs to address gaps. The key steps are setting organizational objectives, forecasting human resource needs and supply, identifying gaps, and creating action plans to recruit, train, and develop employees to meet future requirements. Accurate manpower planning helps organizations adapt to changes, develop talent, utilize staff optimally, and increase investments in human capital.
The document outlines the process for man power planning and hiring at a company. It involves analyzing current job descriptions and requirements, requisitioning for new positions internally or through external advertising, collecting and validating candidate resumes from various sources, shortlisting and testing candidates through written tests, technical and HR interviews, reference and stress checks, negotiating contracts and salaries, and onboarding selected candidates.
Internship report on Janata Bank Limited and its general banking activities (Akash Kumar Ghosh)
This internship report summarizes the author's 3-month internship at Janata Bank Limited in Bangladesh. It provides an overview of the bank, including its background, products/services and operational areas. The report consists of 7 chapters that describe the general banking activities observed, lessons learned, analysis/findings, and recommendations. The author's main focus was the bank's general banking section, where they gained practical experience of the bank's core operations under supervision.
Manpower planning involves forecasting future personnel needs, assessing current staffing levels, and developing strategies to ensure the right number and types of employees are available at the right times. It is a systematic process that promotes optimal use of human resources, continuous staffing, and flexibility to adapt to changing needs or circumstances. The goal is to link business and operational strategies by maintaining an appropriate balance between future workforce supply and demand.
Human resource planning is a process to ensure an organization has the right number and type of people at the right time. It involves forecasting future human resource needs, analyzing current supply, and implementing programs to address imbalances. Key aspects of HRP include forecasting demand and supply, balancing the two through programming, implementing plans, and controlling and evaluating outcomes. The goal is to maximize returns on human resource investments and ensure strategic objectives are met.
Existence of Solutions of Fractional Neutral Integrodifferential Equations wi... (inventionjournals)
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science, and technology, including new teaching methods, assessment, validation, and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in the journal can be accessed online.
The document presents a method for solving fuzzy assignment problems using triangular and trapezoidal fuzzy numbers. It formulates the fuzzy assignment problem into a crisp linear programming problem that can be solved using the Hungarian method. The paper also uses Robust's ranking method to transform fuzzy costs into crisp values, allowing conventional solution methods to be applied. It aims to provide a more realistic approach to assignment problems by considering costs as fuzzy numbers rather than deterministic values.
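A minimal sketch of the pipeline described above, with two simplifications: a centroid defuzzification stands in for Robust's ranking method, and exhaustive search over permutations stands in for the Hungarian method (adequate only for small instances):

```python
from itertools import permutations

def rank_triangular(tfn):
    """Centroid ranking of a triangular fuzzy number (a, b, c) -> (a+b+c)/3.
    A simple stand-in for the ranking step; the paper uses Robust's ranking."""
    a, b, c = tfn
    return (a + b + c) / 3

def solve_assignment(fuzzy_costs):
    """Defuzzify each cost, then find the minimum-cost assignment by brute
    force over all permutations (the Hungarian method scales better but is
    longer to write out)."""
    crisp = [[rank_triangular(t) for t in row] for row in fuzzy_costs]
    n = len(crisp)
    best = min(permutations(range(n)),
               key=lambda p: sum(crisp[i][p[i]] for i in range(n)))
    return list(best), sum(crisp[i][best[i]] for i in range(n))
```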
An Approach For Solving Nonlinear Programming Problems (Mary Montoya)
This document presents an approach for solving nonlinear programming problems using measure theory. It begins by transforming a nonlinear programming problem into an optimal control problem by treating the variables as time-varying and integrating the objective and constraint functions. It then solves the optimal control problem using measure theory by representing the control-trajectory pair as a positive Radon measure on the space of trajectories and controls. Finally, it shows that the optimal solution to the transformed optimal control problem provides an approximate optimal solution to the original nonlinear programming problem.
IJCER (www.ijceronline.com) International Journal of computational Engineerin... (ijceronline)
This document presents a method for solving fuzzy assignment problems where costs are represented by linguistic variables and fuzzy numbers. Linguistic variables are used to convert qualitative cost data into quantitative fuzzy numbers. Yager's ranking method is applied to rank the fuzzy numbers, transforming the fuzzy assignment problem into a crisp one. The resulting crisp problem is then solved using the Hungarian method to find the optimal assignment that minimizes total cost. A numerical example demonstrates the approach, showing a fuzzy cost matrix converted to crisp values and solved. The method allows handling assignment problems with imprecise, qualitative cost data using fuzzy logic concepts.
Using Alpha-cuts and Constraint Exploration Approach on Quadratic Programming... (TELKOMNIKA JOURNAL)
In this paper, we propose a computational procedure to find the optimal solution of quadratic programming problems by using fuzzy α-cuts and a constraint exploration approach. We solve the problems in their original form without using any additional information such as Lagrange multipliers or slack, surplus, and artificial variables. To find the optimal solution, we divide the calculation into two stages. In the first stage, we determine the unconstrained minimum of the quadratic programming problem (QPP) and check its feasibility; the unconstrained minimum identifies the violated constraints, on which we focus our search. In the second stage, we explore the feasible region along the violated constraints until the optimal point is reached. A numerical example is included in this paper to illustrate the capability of α-cuts and constraint exploration to find the optimal solution of a QPP.
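The first stage can be sketched as below for the special case of a diagonal Q and linear constraints a·x ≤ b (a hypothetical toy instance; the α-cut machinery and the second-stage exploration along violated constraints are omitted):

```python
def stage_one(Q_diag, c, constraints):
    """Stage 1 of the two-stage procedure: take the unconstrained minimizer
    of (1/2) x'Qx + c'x, which for a diagonal positive Q is x_i = -c_i/Q_ii,
    and report which constraints a.x <= b it violates."""
    x = [-ci / qi for ci, qi in zip(c, Q_diag)]
    violated = [k for k, (a, b) in enumerate(constraints)
                if sum(ai * xi for ai, xi in zip(a, x)) > b]
    return x, violated
```

If no constraint is violated, the unconstrained minimizer is already optimal; otherwise the search of the second stage is confined to the violated constraints.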
On estimating the integrated co-volatility using (kkislas)
This document proposes a method to estimate the integrated co-volatility of two asset prices using high-frequency data that contains both microstructure noise and jumps.
It considers two cases - when the jump processes of the two assets are independent, and when they are dependent. For the independent case, it proposes an estimator that is robust to jumps. For the dependent case, it proposes a threshold estimator that combines pre-averaging to remove noise with a threshold method to reduce the effect of jumps. It proves the estimators are consistent and establishes their central limit theorems. Simulation results are also presented to illustrate the performance of the proposed methods.
This document describes a quadratic assignment problem (QAP) formulation involving 358 constraints and 50 variables. It provides an example of a QAP with 3 facilities and 3 locations. The QAP aims to assign facilities to locations in a way that minimizes total cost, which is a function of the flow between facilities and the distance between locations. Several applications of QAP are discussed, including facility location, scheduling, and ergonomic design problems.
Cubic convolution interpolation is a new technique for resampling discrete data that has several desirable features for image processing. It can be performed efficiently on a digital computer. The cubic convolution interpolation function converges uniformly to the function being interpolated as the sampling increment approaches zero, achieving third-order accuracy. The paper derives the one-dimensional cubic convolution interpolation function and shows how it can be extended separably to two dimensions for interpolating image data.
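A one-dimensional sketch of the technique, using Keys' kernel with parameter a = -1/2 (the value that yields third-order accuracy); clamping the sample indices at the boundaries is a simplification of this sketch:

```python
import math

def cubic_kernel(s, a=-0.5):
    """Keys' cubic convolution kernel; a = -0.5 gives third-order accuracy."""
    s = abs(s)
    if s < 1:
        return (a + 2) * s**3 - (a + 3) * s**2 + 1
    if s < 2:
        return a * s**3 - 5 * a * s**2 + 8 * a * s - 4 * a
    return 0.0

def interpolate(samples, x):
    """Resample uniformly spaced data at fractional position x using the
    four nearest samples, weighted by the cubic kernel."""
    i = math.floor(x)
    t = x - i
    total = 0.0
    for m in range(-1, 3):
        idx = min(max(i + m, 0), len(samples) - 1)  # clamp at boundaries
        total += samples[idx] * cubic_kernel(t - m)
    return total
```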
The document presents the application of the reduced differential transform method (RDTM) to solve time-fractional order beam and beam-like equations. RDTM is used to obtain approximate analytical solutions to these equations in the form of a series with few computations. Three test problems are solved to validate the method. The solutions obtained by RDTM for the test problems agree well with the exact solutions.
A common random fixed point theorem for rational inequality in hilbert space (Alexander Decker)
This document presents a common random fixed point theorem for four continuous random operators defined on a non-empty closed subset of a separable Hilbert space. It begins with introducing basic concepts such as separable Hilbert spaces, random operators, and common random fixed points. It then defines a condition (A) that the four mappings must satisfy. The main result is Theorem 2.1, which proves the existence of a unique common random fixed point for the four operators under condition (A) and a rational inequality condition. The proof constructs a sequence of measurable functions and shows it converges to the common random fixed point. This establishes the common random fixed point theorem for these operators.
Redundancy in robot manipulators and multi-robot systems (Springer)
This document summarizes how variational methods have been used to analyze three classes of snakelike robots: (1) hyper-redundant manipulators guided by backbone curves, (2) flexible steerable needles, and (3) concentric tube continuum robots. For hyper-redundant manipulators, variational methods are used to constrain degrees of freedom and provide redundancy resolution. For steerable needles, they generate optimal open-loop plans and model needle-tissue interactions. For concentric tube robots, they determine equilibrium conformations dictated by elastic mechanics principles. The document reviews these applications and illustrates how variational methods provide a natural analysis tool for various snakelike robots.
Partitioning procedures for solving mixed-variables programming problems (SSA KPI)
This document presents two procedures for solving mixed-variables programming problems. The key steps are:
1. The problem is partitioned into two subproblems - a programming problem defined on the set S, and a linear programming problem defined in Rp.
2. A partitioning theorem shows that solving the programming problem on S determines whether the original problem is feasible, feasible but not optimal, or determines an optimal solution and reduces the original problem to a linear programming problem.
3. To efficiently solve the programming problem on S without computing the entire constraint set, the problem is decomposed into subproblems where only constraints determining optimality need to be identified.
A New Method To Solve Interval Transportation Problems (Sara Alvarez)
This document summarizes a research paper that proposes a new method for solving interval transportation problems (ITPs), where the costs, supplies, and demands are represented by intervals rather than precise values. The method transforms the ITP into an equivalent bi-objective problem that minimizes the left limit and width of the interval cost function. Fuzzy programming is then used to obtain the solution to the bi-objective problem. Twenty test problems are solved using the proposed method and compared to an existing method, showing the new approach provides better solutions for 11 of the problems.
Fixed Point Theorem of Compatible of Type (R) Using Implicit Relation in Fuzz... (IRJET Journal)
This document presents a theorem on the existence of a common fixed point for compatible mappings of type (R) in a fuzzy metric space.
The document begins with definitions of key concepts such as fuzzy metric spaces, Cauchy sequences, limits, compatibility, and compatibility of type (R). It then states a theorem that if mappings A, B, S, and T satisfy certain conditions, including being compatible of type (R) and satisfying an implicit relation, then they have a unique common fixed point.
The conditions and proof of the theorem are then provided. The proof constructs a Cauchy sequence and uses properties of the mappings and space like completeness to show the sequence converges to a common fixed point of the mappings.
11.solution of a subclass of singular second order (Alexander Decker)
This document presents a method for solving a subclass of singular second order differential equations of Lane-Emden type using Taylor series. The method identifies conditions where the equations can be solved analytically as a polynomial function. Several examples from literature are provided and solved using the proposed Taylor series method to demonstrate its effectiveness. The method produces exact solutions in polynomial form and is shown to perform well on both linear and nonlinear boundary value problems.
This document discusses using the Taylor series method to solve a subclass of singular second order differential equations of Lane-Emden type.
The method is applied to solve boundary value problems that arise from modeling physical phenomena. The Taylor series method produces an analytic solution in the form of a polynomial function.
Several examples from literature are considered to demonstrate the suitability and viability of the Taylor series method for solving these types of differential equations.
11.generalized and subset integrated autoregressive moving average bilinear t... (Alexander Decker)
This document proposes generalized integrated autoregressive moving average bilinear (GBL) time series models and subset generalized integrated autoregressive moving average bilinear (GSBL) models to achieve stationarity for all nonlinear time series. It presents the models' formulations and discusses their properties, including stationarity, convergence, and parameter estimation. An algorithm is provided to fit the one-dimensional models. The generalized models are applied to Wolfer sunspot numbers, and the GBL model is found to perform better than the GSBL model.
Similar to Fuzzy inventory model with shortages in man power planning
Abnormalities of hormones and inflammatory cytokines in women affected with p... (Alexander Decker)
Women with polycystic ovary syndrome (PCOS) have elevated levels of hormones like luteinizing hormone and testosterone, as well as higher levels of insulin and insulin resistance compared to healthy women. They also have increased levels of inflammatory markers like C-reactive protein, interleukin-6, and leptin. This study found these abnormalities in the hormones and inflammatory cytokines of women with PCOS ages 23-40, indicating that hormone imbalances associated with insulin resistance and elevated inflammatory markers may worsen infertility in women with PCOS.
A usability evaluation framework for b2c e-commerce websites (Alexander Decker)
This document presents a framework for evaluating the usability of B2C e-commerce websites. It involves user testing methods like usability testing and interviews to identify usability problems in areas like navigation, design, purchasing processes, and customer service. The framework specifies goals for the evaluation, determines which website aspects to evaluate, and identifies target users. It then describes collecting data through user testing and analyzing the results to identify usability problems and suggest improvements.
A universal model for managing the marketing executives in nigerian banks (Alexander Decker)
This document discusses a study that aimed to synthesize motivation theories into a universal model for managing marketing executives in Nigerian banks. The study was guided by Maslow and McGregor's theories. A sample of 303 marketing executives was used. The results showed that managers will be most effective at motivating marketing executives if they consider individual needs and create challenging but attainable goals. The emerged model suggests managers should provide job satisfaction by tailoring assignments to abilities and monitoring performance with feedback. This addresses confusion faced by Nigerian bank managers in determining effective motivation strategies.
A unique common fixed point theorem in generalized d (Alexander Decker)
This document presents definitions and properties related to generalized D*-metric spaces and establishes some common fixed point theorems for contractive type mappings in these spaces. It begins by introducing D*-metric spaces and generalized D*-metric spaces, defines concepts like convergence and Cauchy sequences. It presents lemmas showing the uniqueness of limits in these spaces and the equivalence of different definitions of convergence. The goal of the paper is then stated as obtaining a unique common fixed point theorem for generalized D*-metric spaces.
Trends of salmonella and antibiotic resistance (Alexander Decker)
This document provides a review of trends in Salmonella and antibiotic resistance. It begins with an introduction to Salmonella as a facultative anaerobe that causes nontyphoidal salmonellosis. The emergence of antimicrobial-resistant Salmonella is then discussed. The document proceeds to cover the historical perspective and classification of Salmonella, definitions of antimicrobials and antibiotic resistance, and mechanisms of antibiotic resistance in Salmonella including modification or destruction of antimicrobial agents, efflux pumps, modification of antibiotic targets, and decreased membrane permeability. Specific resistance mechanisms are discussed for several classes of antimicrobials.
A transformational generative approach towards understanding al-istifham (Alexander Decker)
This document discusses a transformational-generative approach to understanding Al-Istifham, which refers to interrogative sentences in Arabic. It begins with an introduction to the origin and development of Arabic grammar. The paper then explains the theoretical framework of transformational-generative grammar that is used. Basic linguistic concepts and terms related to Arabic grammar are defined. The document analyzes how interrogative sentences in Arabic can be derived and transformed via tools from transformational-generative grammar, categorizing Al-Istifham into linguistic and literary questions.
A time series analysis of the determinants of savings in Namibia (Alexander Decker)
This document summarizes a study on the determinants of savings in Namibia from 1991 to 2012. It reviews previous literature on savings determinants in developing countries. The study uses time series analysis including unit root tests, cointegration, and error correction models to analyze the relationship between savings and variables like income, inflation, population growth, deposit rates, and financial deepening in Namibia. The results found inflation and income have a positive impact on savings, while population growth negatively impacts savings. Deposit rates and financial deepening were found to have no significant impact. The study reinforces previous work and emphasizes the importance of improving income levels to achieve higher savings rates in Namibia.
A therapy for physical and mental fitness of school children (Alexander Decker)
This document summarizes a study on the importance of exercise in maintaining physical and mental fitness for school children. It discusses how physical and mental fitness are developed through participation in regular physical exercises and cannot be achieved solely through classroom learning. The document outlines different types and components of fitness and argues that developing fitness should be a key objective of education systems. It recommends that schools ensure pupils engage in graded physical activities and exercises to support their overall development.
A theory of efficiency for managing the marketing executives in nigerian banks (Alexander Decker)
This document summarizes a study examining efficiency in managing marketing executives in Nigerian banks. The study was examined through the lenses of Kaizen theory (continuous improvement) and efficiency theory. A survey of 303 marketing executives from Nigerian banks found that management plays a key role in identifying and implementing efficiency improvements. The document recommends adopting a "3H grand strategy" to improve the heads, hearts, and hands of management and marketing executives by enhancing their knowledge, attitudes, and tools.
This document discusses evaluating the link budget for effective 900MHz GSM communication. It describes the basic parameters needed for a high-level link budget calculation, including transmitter power, antenna gains, path loss, and propagation models. Common propagation models for 900MHz that are described include Okumura model for urban areas and Hata model for urban, suburban, and open areas. Rain attenuation is also incorporated using the updated ITU model to improve communication during rainfall.
A synthetic review of contraceptive supplies in Punjab (Alexander Decker)
This document discusses contraceptive use in Punjab, Pakistan. It begins by providing background on the benefits of family planning and contraceptive use for maternal and child health. It then analyzes contraceptive commodity data from Punjab, finding that use is still low despite efforts to improve access. The document concludes by emphasizing the need for strategies to bridge gaps and meet the unmet need for effective and affordable contraceptive methods and supplies in Punjab in order to improve health outcomes.
A synthesis of Taylor’s and Fayol’s management approaches for managing market... (Alexander Decker)
1) The document discusses synthesizing Taylor's scientific management approach and Fayol's process management approach to identify an effective way to manage marketing executives in Nigerian banks.
2) It reviews Taylor's emphasis on efficiency and breaking tasks into small parts, and Fayol's focus on developing general management principles.
3) The study administered a survey to 303 marketing executives in Nigerian banks to test if combining elements of Taylor and Fayol's approaches would help manage their performance through clear roles, accountability, and motivation. Statistical analysis supported combining the two approaches.
A survey paper on sequence pattern mining with incremental (Alexander Decker)
This document summarizes four algorithms for sequential pattern mining: GSP, ISM, FreeSpan, and PrefixSpan. GSP is an Apriori-based algorithm that incorporates time constraints. ISM extends SPADE to incrementally update patterns after database changes. FreeSpan uses frequent items to recursively project databases and grow subsequences. PrefixSpan also uses projection but claims to not require candidate generation. It recursively projects databases based on short prefix patterns. The document concludes by stating the goal was to find an efficient scheme for extracting sequential patterns from transactional datasets.
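The prefix-projection idea behind PrefixSpan can be sketched as follows, restricted to sequences of single items (real sequence databases allow itemset elements, which this sketch omits):

```python
def prefixspan(db, minsup):
    """Mine frequent sequential patterns by recursive prefix projection:
    count frequent items, extend the current prefix by each, and recurse
    on the projected database of suffixes. No candidate generation."""
    patterns = []

    def project(db, prefix):
        # Support = number of sequences containing the item at least once.
        counts = {}
        for seq in db:
            for item in set(seq):
                counts[item] = counts.get(item, 0) + 1
        for item, sup in sorted(counts.items()):
            if sup < minsup:
                continue
            new_prefix = prefix + [item]
            patterns.append((new_prefix, sup))
            # Projected database: suffix after the first occurrence of item.
            projected = [seq[seq.index(item) + 1:] for seq in db if item in seq]
            project([s for s in projected if s], new_prefix)

    project(db, [])
    return patterns
```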
A survey on live virtual machine migrations and its techniques (Alexander Decker)
This document summarizes several techniques for live virtual machine migration in cloud computing. It discusses works that have proposed affinity-aware migration models to improve resource utilization, energy efficient migration approaches using storage migration and live VM migration, and a dynamic consolidation technique using migration control to avoid unnecessary migrations. The document also summarizes works that have designed methods to minimize migration downtime and network traffic, proposed a resource reservation framework for efficient migration of multiple VMs, and addressed real-time issues in live migration. Finally, it provides a table summarizing the techniques, tools used, and potential future work or gaps identified for each discussed work.
A survey on data mining and analysis in Hadoop and MongoDB (Alexander Decker)
Mathematical Theory and Modeling www.iiste.org
ISSN 2224-5804 (Paper) ISSN 2225-0522 (Online)
Vol.1, No.1, 2011
Fuzzy Inventory Model with Shortages in Man Power Planning
Punniakrishnan.K1* Kadambavanam.K2
1. Associate Professor in Mathematics, Sri Vasavi College, Erode, Tamilnadu, India.
2. Associate Professor in Mathematics, Sri Vasavi College, Erode, Tamilnadu, India.
* E-mail of the corresponding author: punniakrishnan@gmail.com
Abstract
In this paper, an EOQ (Economic Order Quitting) model with shortages (of employees) is studied.
The cost due to the decrease in real wage and the cost involved in moving to a new job are considered,
with the constraint that the decrease in real income over a period of time is limited. In real life these
costs are uncertain to some extent. This uncertainty is handled by utilizing the concept of fuzzy set
theory. A fuzzy non-linear programming technique using Lagrange multipliers is used to solve the
problems in this model. The application of the model to man power planning is illustrated by means of
a numerical example. The results are compared with those of the crisp model, and a sensitivity
analysis is also presented.
Keywords: Inventory, Economic Order Quitting, Real Wage, Fuzzy Sets, Man Power Planning,
Membership Function, Sensitivity Analysis.
1. Introduction
Many researchers have worked on the EOQ model since the publication of the classical lot-size
formula by Harris in 1915. At present, fuzzy mathematical programming is recognized as one of the
most promising and reliable fields of research. Glenn T. Wilson's [4] square root rule for employment
change has been applied to develop an EOQ model in man power planning, to obtain the optimal time
for an employee of an organization to quit the present job, based on the condition that the salary
increase in the new job is equal to the decrease in the real income at the time of quitting [9].
In inventory models we usually deal with costs that are crisp, that is, fixed and exact. In realistic
situations, however, these costs vary over a certain predetermined range. These uncertainties are due
to fuzziness, and in such cases the fuzzy set theory introduced by Zadeh [1] is applicable. In this paper
an attempt is made to obtain a fuzzy model for Wilson's paper [4] by adding an appropriate constraint.
In 1995, T.K. Roy and M. Maiti [10] presented an EOQ model with a constraint in a fuzzy environment.
The present model is solved by the fuzzy non-linear programming (FNLP) method using Lagrange
multipliers and is illustrated with numerical examples. The solution is compared with the solution of the
corresponding crisp problem. Sensitivity analysis is also carried out on the optimum increase in salary
in the new job and on the optimum quitting time of the present job, for variations in the rate of decrease
in real wages, following Dutta, D., J.R. Rao and R.N. Tiwari [3].
The constant increase in the cost of living is always more than the increase in the salary of an
employee, which in turn causes a decrease in his real income. In this situation it is quite common for an
employee to think of quitting the present job and switching over to a new one. An EOQ model is
analyzed using fuzzy set theory, which gives the optimal time for an employee to quit at minimum cost.
2. Notations and Terminology
I – Initial real income of an employee in the first year of our discussion (holding cost)
S – The cost of resettling in the new place (setup cost)
D – The cost of deficit at the time of quitting (shortage cost)
R – Rate of decline of real income per year, defined as a proportion of I
RI – Amount of decrease in real income per year, expressed as a proportion of I
Q₁I – Quantity of salary rise in the new job in a year
Q₁ – Quantity of salary rise, as a proportion of I, necessary to make it worthwhile to change jobs
Q₂ – Re-order quantity of employees (re-order quantity of salary)
Q = Q₁ + Q₂
T – Time in years between changing jobs
B – The upper limit for the decrease in salary (real income) at the time of quitting
Terminology:

    Cost of Living Index = (Σ P₁Q₀ / Σ P₀Q₀) × 100

where P₀ and P₁ are the prices of the goods in the base (reference) year and in the current year, and
Q₀ are the quantities of the goods bought in the base year.

    Real Wage = (Actual Wage / Cost of Living Index) × 100

We assume that the decrease in real income every year is uniform, so that after T years the real
income is reduced by RIT. If a person changes his job after T years, his salary increase in the new job,
Q₁I, must be at least RIT. Therefore Q₁I = RIT, which implies T = Q₁/R (see Figure 1).
3. Mathematical Analysis
A crisp non-linear programming problem may be defined as follows:

    Min g₀(X, C₀)
    subject to: gᵢ(X, Cᵢ) ≤ bᵢ,   i = 1, 2, …, m,
                X ≥ 0,      (3.1)

where X = (x₁, x₂, …, xₙ) is a variable vector and g₀, gᵢ are algebraic expressions in X with
coefficients C₀ = (c₀₁, c₀₂, …, c₀ₙ) and Cᵢ = (cᵢ₁, cᵢ₂, …, cᵢₙ) respectively.

Introducing fuzziness in the crisp parameters, the system (3.1) in a fuzzy environment is:

    Min g₀(X, C̃₀)
    subject to: gᵢ(X, C̃ᵢ) ≤ b̃ᵢ,   i = 1, 2, …, m,
                X ≥ 0,      (3.2)

where the wave bar (~) represents the fuzzification of the parameters.
In fuzzy set theory, the fuzzy objective, coefficients and constraints are defined by their membership
functions, which may be linear or non-linear. According to Bellman and Zadeh [1], Carlson and
Korhonen [2] and Trappey et al. [11], the above problem can be written as

    Max α
    subject to: gᵢ(X, μCᵢ⁻¹(α)) ≤ μbᵢ⁻¹(α),   i = 0, 1, 2, …, m,
                X ≥ 0,      (3.3)

where μCᵢ(x) and μbᵢ(x) are the membership functions of the fuzzy objective and the fuzzy constraints,
and α is an additional variable which is known as the aspiration level.

Therefore, the Lagrangian function is given by

    L(α, X, λ) = α − Σᵢ₌₀ᵐ λᵢ[gᵢ(X, μCᵢ⁻¹(α)) − μbᵢ⁻¹(α)]      (3.4)

where λᵢ (i = 0, 1, 2, …, m) are Lagrange multipliers.

According to the Kuhn–Tucker [8] necessary conditions, the optimal values
(x₁*, x₂*, …, xₙ*, λ₀*, λ₁*, …, λₘ*, α*) should satisfy

    ∂L/∂xⱼ = 0,   ∂L/∂α = 0,
    λᵢ[gᵢ(X, μCᵢ⁻¹(α)) − μbᵢ⁻¹(α)] = 0,
    gᵢ(X, μCᵢ⁻¹(α)) − μbᵢ⁻¹(α) ≤ 0,
    λᵢ ≤ 0,   i = 0, 1, 2, …, m,   j = 1, 2, 3, …, n.      (3.5)

The Kuhn–Tucker sufficient conditions demand that the objective function (for maximization) and the
constraints should be respectively concave and convex.

Solving equations (3.5), the optimal solution of the fuzzy non-linear programming problem is obtained.
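Alternatively, the max-α problem (3.3) can be attacked numerically: for each fixed α the membership inverses are crisp numbers, so feasibility of the constraint system can be checked directly, and the largest feasible α is the optimum. The sketch below uses a toy one-variable problem with made-up linear membership inverses (not the model of this paper):

```python
def feasible(alpha):
    """Check whether the alpha-level crisp system has a feasible x.
    Toy problem: g(x) = x**2 must satisfy g(x) <= mu_b_inv(alpha)
    with x >= mu_c_inv(alpha), for made-up linear membership inverses."""
    mu_b_inv = 4.0 - 2.0 * alpha    # non-increasing resource level
    mu_c_inv = 1.0 + 1.0 * alpha    # non-decreasing lower bound on x
    # The smallest attainable g(x) under x >= mu_c_inv is mu_c_inv**2.
    return mu_c_inv ** 2 <= mu_b_inv

def max_alpha(tol=1e-6):
    """Bisection for the largest feasible aspiration level in [0, 1]."""
    lo, hi = 0.0, 1.0
    if feasible(hi):
        return hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo

print(round(max_alpha(), 4))
```

For this toy system the boundary is (1 + α)² = 4 − 2α, so the bisection converges to α* = √7 − 2 ≈ 0.6458.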
4. EOQ Model where Fuzzy Goal and Costs are Represented by Linear Membership Functions
In the crisp EOQ model, the problem is to choose the order level Q₁ (> 0), with Q = Q₁ + Q₂, which
minimizes the average total cost C(Q₁) per unit time. That is,

    Min C(Q₁) = I(Q₁²/2Q) + D(Q₂²/2Q) + S(R/Q)
    subject to: Q₁I ≤ B,
                Q₁ > 0.      (4.1)

The fuzzy version of the square root rule for the employment change model with one constraint is
written as follows:

    Max α
    subject to: μI⁻¹(α)(Q₁²/2Q) + μD⁻¹(α)(Q₂²/2Q) + μS⁻¹(α)(R/Q) ≤ μC⁻¹(α),
                Q₁I ≤ μB⁻¹(α),
                Q₁ > 0,   α ∈ [0, 1],      (4.2)

where μI(x), μD(x), μS(x), μC(x) and μB(x) are linear membership functions for the real income, the
deficit cost, the setup cost, the objective function and the upper bound for the decrease in real wage
respectively.
Assume that the linear membership functions are defined as follows:

    μI(x) = 0                  for x < I − I₀
          = 1 − (I − x)/I₀     for I − I₀ ≤ x ≤ I
          = 1                  for x > I

    μD(x) = 0                  for x < D − D₀
          = 1 − (D − x)/D₀     for D − D₀ ≤ x ≤ D
          = 1                  for x > D

    μS(x) = 0                  for x < S − S₀
          = 1 − (S − x)/S₀     for S − S₀ ≤ x ≤ S
          = 1                  for x > S

    μC(x) = 0                  for x > C + C₀
          = 1 − (x − C)/C₀     for C ≤ x ≤ C + C₀
          = 1                  for x < C

    μB(x) = 0                  for x > B + B₀
          = 1 − (x − B)/B₀     for B ≤ x ≤ B + B₀
          = 1                  for x < B      (4.3)

Here I₀, D₀, S₀, C₀ and B₀ are the maximal violations of the aspiration levels of I, D, S, C and B (the
permissible ranges). From the nature of the parameters defined, it is observed that the setup cost, the
deficit cost and the real income are non-decreasing, while the objective function and the upper limit for
the decrease in real income are non-increasing. Hence we have

    μI⁻¹(α) = I − (1 − α)I₀;   μD⁻¹(α) = D − (1 − α)D₀;
    μS⁻¹(α) = S − (1 − α)S₀;   μC⁻¹(α) = C + (1 − α)C₀;
    μB⁻¹(α) = B + (1 − α)B₀.      (4.4)
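The linear membership functions (4.3) and their inverses (4.4) translate directly into code. The following sketch is illustrative (the parameter values from the example in Section 4 are used only for a round-trip check):

```python
def mu_lower(x, a, a0):
    """Non-decreasing linear membership, e.g. mu_I: 0 below a - a0, 1 above a."""
    if x < a - a0:
        return 0.0
    if x > a:
        return 1.0
    return 1.0 - (a - x) / a0

def mu_upper(x, a, a0):
    """Non-increasing linear membership, e.g. mu_C: 1 below a, 0 above a + a0."""
    if x > a + a0:
        return 0.0
    if x < a:
        return 1.0
    return 1.0 - (x - a) / a0

def mu_lower_inv(alpha, a, a0):
    """Inverse of mu_lower on its ramp: x = a - (1 - alpha)*a0, as in (4.4)."""
    return a - (1.0 - alpha) * a0

def mu_upper_inv(alpha, a, a0):
    """Inverse of mu_upper on its ramp: x = a + (1 - alpha)*a0, as in (4.4)."""
    return a + (1.0 - alpha) * a0

# Round trip at alpha = 0.75 with the example's I = 50000, I0 = 13000.
x = mu_lower_inv(0.75, 50000, 13000)   # 50000 - 0.25*13000 = 46750
print(x, mu_lower(x, 50000, 13000))    # the membership value recovers 0.75
```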
The Lagrangian function takes the form

    L(α, Q₁, Q₂, λ₁, λ₂) = α − λ₁[(I − (1 − α)I₀)(Q₁²/2Q) + (D − (1 − α)D₀)(Q₂²/2Q)
                               + (S − (1 − α)S₀)(R/Q) − (C + (1 − α)C₀)]
                             − λ₂[Q₁I − (B + (1 − α)B₀)].      (4.5)

By the Kuhn–Tucker necessary conditions, we have

    ∂L/∂α = 0 ⇒ 1 − λ₁[I₀(Q₁²/2Q) + D₀(Q₂²/2Q) + S₀(R/Q) + C₀] − λ₂B₀ = 0      (4.6)

    ∂L/∂Q₁ = 0 ⇒ λ₁[(I − (1 − α)I₀)(Q₁(Q₁ + 2Q₂)/2Q²) − (D − (1 − α)D₀)(Q₂²/2Q²)
                    − (S − (1 − α)S₀)(R/Q²)] + λ₂I = 0      (4.7)

    ∂L/∂Q₂ = 0 ⇒ λ₁[(D − (1 − α)D₀)(Q₂(Q₂ + 2Q₁)/2Q²) − (I − (1 − α)I₀)(Q₁²/2Q²)
                    − (S − (1 − α)S₀)(R/Q²)] = 0      (4.8)
    ∂L/∂λ₁ = 0 ⇒ (I − (1 − α)I₀)(Q₁²/2Q) + (D − (1 − α)D₀)(Q₂²/2Q)
                  + (S − (1 − α)S₀)(R/Q) − (C + (1 − α)C₀) = 0      (4.9)

    ∂L/∂λ₂ = 0 ⇒ Q₁I − (B + (1 − α)B₀) = 0
               ⇒ Q₁I = B + (1 − α)B₀.      (4.10)

Solving these equations, the expression for the optimum quantity of salary rise is

    Q₁* = (B + (1 − α*)B₀)/I      (4.11)

and the optimum re-order level is

    Q* = ([I − (1 − α*)I₀]/[D − (1 − α*)D₀]) Q₁*      (4.12)

where α* is a root of

    [I − (1 − α)I₀][B + (1 − α)B₀] + [D − (1 − α)D₀]Q₂I + 2[S − (1 − α)S₀]RI
        − 2[C + (1 − α)C₀]QI = 0      (4.13)

which is a cubic equation in (1 − α). Solving this equation we obtain α* and hence Q₁*, Q₂*, I*, D*, S*,
B*, T* and C*, the optimal values, which lie within the tolerance limits of the fuzzy ranges.
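Since only the root of (4.13) lying in the admissible range is needed, any bracketing method suffices in practice. A minimal bisection sketch is shown below; the cubic used is purely illustrative (its coefficients are not those of (4.13)):

```python
def bisect_root(f, lo=0.0, hi=1.0, tol=1e-8):
    """Find a root of f in [lo, hi] by bisection, assuming f(lo) and f(hi)
    differ in sign."""
    flo = f(lo)
    if flo == 0.0:
        return lo
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0.0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

# Illustrative cubic in t = 1 - alpha with a root in (0, 1): t**3 + 2t - 1 = 0.
t_star = bisect_root(lambda t: t ** 3 + 2.0 * t - 1.0)
alpha_star = 1.0 - t_star
print(round(alpha_star, 4))
```

The same routine, applied to the left-hand side of (4.13) as a function of t = 1 − α, yields α* = 1 − t*.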
Remark 4.1.1
When Q₂ = 0, equation (4.13) reduces to

    [I − (1 − α)I₀][B + (1 − α)B₀] + 2[S − (1 − α)S₀]RI
        − 2[C + (1 − α)C₀][B + (1 − α)B₀] = 0      (4.14)

which is the equation derived in [7]. When expanded in powers of (1 − α), equation (4.14) takes a
polynomial form whose coefficients are expressed in terms of I, I₀, S, S₀, C, C₀, B, B₀ and R,      (4.15)
which is the equation derived in [6].
Example:
Let us assume that a person has the following fuzzy costs and goals:

    I = 50000, I₀ = 13000;   D = 11500, D₀ = 5000;
    S = 20000, S₀ = 5000;   B = 10000, B₀ = 1400;
    C = 6000, C₀ = 500;   R = 0.05.

Substituting these values in equation (4.13), we obtain the optimal values α* = 1 (the maximum value),
Q₁* = 0.2, T* = 4 and Q* = 0.87, and hence I* = 50000, S* = 20000, D* = 11500, B* = 10000 and
C* = 6000, which are the original values of I, S, D, B and C. These are the best optimal values we can
ever get for this problem, since the aspiration level α takes its maximum value (α = 1). From this result
we conclude that the person can leave his present job after 4 years, provided that the increase in
salary in the new job is at least 20% of I and the re-order level of salary (re-order level of employees)
for the company is at least 87% of I, the initial income. The same problem can also be solved with
variations in R.
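The headline figures of the example follow from (4.10) and T = Q₁/R alone: with α* = 1 the fuzzy bounds collapse to the crisp values, so a quick arithmetic check is possible (this verifies only Q₁* and T*, not the full solution of (4.13)):

```python
I, B, R = 50000, 10000, 0.05

# (4.10) with alpha* = 1: Q1*I = B, so Q1* = B / I.
Q1 = B / I
# T* = Q1 / R: years before quitting the present job.
T = Q1 / R

print(Q1)  # 0.2 -> salary rise of at least 20% of I
print(T)   # 4.0 -> quit after 4 years
```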
A comparison of the crisp model with the fuzzy model, for various combinations of the extreme limits
of I, S, D, C and B with R = 0.05 in the above example, is given in the following table.

Analysis on Comparison Table 1
With R = 0.055, the fuzzy model gives the aspiration level α = 0.8419 and the optimal cost Rs. 6016,
with an optimal quitting time of 3.9 years, a new salary rise of 21% and a re-order level of salary
(re-order level of employees) of 95%. With the different crisp parameter combinations, only three
values (1, 2 and 6) fall within the permissible expenditure range (6000–6500), and these three
expenditure values are greater than the fuzzy C* value. Hence the above eight combinations are not
sufficient to obtain the optimal solution of this problem by the crisp model; some more combinations of
parameters would have to be worked out to reach the optimal value C* = 6016, which is cumbersome,
whereas the fuzzy model simplifies the work of obtaining the optimal value. This is one of the
advantages of fuzzy applications in day-to-day problems.
5. Sensitivity Analysis
Consider the sensitivity analysis [6, 7] on the EOQs and the other costs with variations in the tolerance
limit of the total cost C, which is shown in the following table.

Analysis on Comparison Table 2
In Table 2 we study the effect of variations in R. As the cost of living index changes from place to
place, the rate of decrease in real income also varies from place to place. For people dwelling in
different places with the same I, D, S, C and B, along with the same permissible variations I₀, D₀, S₀,
C₀ and B₀, we observe from the table that as R increases, α decreases towards zero, and Q₁* and Q₂*
increase slightly. We also observe that I*, D*, S* and T* decrease, while C* and B* increase within the
permissible range, as R increases.
Tables similar to Table 2 can also be constructed for variations in I₀, D₀, S₀, C₀ and B₀ individually,
and the corresponding changes in α, Q₁*, Q₂*, T* and the other values can be studied.
6. Conclusion
In this paper we have considered a real-life inventory problem faced by an employee in connection
with a change of jobs, which can be solved easily by applying a fuzzy inventory model. A sensitivity
analysis has been presented. The model can also be extended to inventory problems under several
constraints.
References
[1] Bellman, R.E. and Zadeh, L.A. (1970), "Decision making in a fuzzy environment", Management
Science, Vol. 17, pp. 141–164.
[2] Carlson, C. and Korhonen, P. (1986), "A parametric approach to fuzzy linear programming", Fuzzy
Sets and Systems, Vol. 20, pp. 17–30.
[3] Dutta, D., Rao, J.R. and Tiwari, R.N. (1993), "Effect of tolerance in fuzzy linear fractional
programming", Fuzzy Sets and Systems, Vol. 55, pp. 133–142.
[4] Wilson, Glenn T. (1982), "The square root rule for employment change", Journal of the Operational
Research Society, Vol. 33, pp. 1041–1043.
[5] Taha, Hamdy A. (1988), Operations Research: An Introduction, Sixth Edition, Prentice Hall of India
Private Limited, New Delhi.
[6] Kadambavanam, K. and Punniakrishnan, K. (2011), "Fuzzy inventory model allowing shortages
and storage constraint", International Journal of Production Technology and Management Research,
to be published.
[7] Kadambavanam, K. and Punniakrishnan, K. (2011), "Inventory model in a fuzzy environment with
exponential membership cost functions", CiiT International Journals, to be published.
[8] Kuhn, H.W. and Tucker, A.W. (1951), "Non-linear programming", in J. Neyman (ed.), Proceedings
of the Second Berkeley Symposium on Mathematical Statistics and Probability, University of California
Press, pp. 481–498.
[9] Rajagopal, Rajalakshmi and Jeeva, M. (2001), "Fuzzy inventory model in man power planning",
Proceedings of the National Conference on Mathematical and Computational Models, December
27–28, PSG College of Technology, Coimbatore, India.
[10] Roy, T.K. and Maiti, M. (1995), "A fuzzy inventory model with constraint", Opsearch, Vol. 32,
No. 4, pp. 287–298.
[11] Trappey, J.F.C., Liu, C.R. and Chang, T.C. (1988), "Fuzzy non-linear programming: theory and
applications in manufacturing", International Journal of Production Research, Vol. 26, No. 5,
pp. 975–985.
[12] Zimmermann, H.J. (1996), Fuzzy Set Theory and its Applications, 2nd Revised Edition, Allied
Publishers Limited, New Delhi, in association with Kluwer Academic Publishers, Boston.
Author Biographies
K. Punniakrishnan,
Associate Professor in Mathematics,
Sri Vasavi College, Erode-638316, Tamil Nadu.
The author was born on 3rd June 1955 at Kalipatti, Erode District. He completed his postgraduate studies at Madras University, Madras, in 1980, his M.Phil degree at Bharathiar University, Coimbatore, in 1987, and his M.Ed degree at Madras University, Madras, in 1994. His area of specialization in the M.Phil degree is Functional Analysis. At present he is pursuing a part-time Ph.D. degree at Bharathiar University, Coimbatore. His area of research is fuzzy inventory backorder models.
The author currently works at Sri Vasavi College, Erode, affiliated to Bharathiar University, Coimbatore. He has 30 years of teaching experience at both the U.G. and P.G. levels and 15 years of research experience. So far he has guided 40 M.Phil candidates. His broad field of research is the analysis of optimization techniques in a fuzzy environment. His research articles are to appear in several international and national journals.
He is a member of
(i) the Board of Studies (UG) in Autonomous Colleges, and
(ii) the Panel of Resource Persons, Annamalai University, Annamalai Nagar, and IGNOU.
Dr. K. Kadambavanam,
Associate Professor in Mathematics,
Sri Vasavi College, Erode-638316, Tamil Nadu.
The author was born on 27th September 1956 at Palani. He completed his postgraduate studies at Annamalai University, Annamalai Nagar, in 1979, and his M.Phil degree at the same university in 1980. His area of specialization in the M.Phil degree is Stochastic Processes. In 2006 he was awarded the Ph.D. degree by Bharathiar University, Coimbatore. His area of research is fuzzy queueing models.
The author currently works at Sri Vasavi College, Erode, affiliated to Bharathiar University, Coimbatore. He has 31 years of teaching experience at both the U.G. and P.G. levels and 15 years of research experience. So far he has guided 15 M.Phil candidates and one Ph.D. scholar. His broad field of research is the analysis of optimization techniques in a fuzzy environment. In 2005 he undertook a project on fuzzy inventory models supported by the University Grants Commission, Hyderabad. He organized a national-level seminar on 'Recent Trends in the Application of Mathematics and Statistics with Reference to Physical and Social Sciences', sponsored by the University Grants Commission, Hyderabad, at Sri Vasavi College, Erode. His research articles have been published in many international and national journals and in edited book volumes.
He is a member of
(i) the Board of Studies (PG), Bharathiar University, Coimbatore,
(ii) the Kerala Mathematical Association,
(iii) the Panel of Resource Persons, Annamalai University, Annamalai Nagar, and
(iv) the Doctoral Committee for the Ph.D. program, Gandhigram Rural University, Gandhigram.
Figure I: The decrease in real income until the time of quitting, that is, T years.
Figure II: The membership function for the real wage, the deficit cost, or the setup cost.
Figure III: The membership function for the objective function, or for the upper limit on the decrease in real income.
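Membership functions of the kind depicted in Figures II and III are typically linear and non-increasing: full membership up to a crisp value, falling to zero once a tolerance is exceeded. As a minimal illustrative sketch (not the paper's exact functions; the parameter names `a` and `p` are hypothetical), such a function can be written as:

```python
def mu_decreasing(x, a, p):
    """Linearly non-increasing membership function.

    Membership is 1 for x <= a (the crisp value), 0 for x >= a + p
    (a is exceeded by more than the tolerance p), and falls linearly
    in between. Illustrative only; the paper's own membership
    functions may differ in shape and parameters.
    """
    if x <= a:
        return 1.0
    if x >= a + p:
        return 0.0
    return (a + p - x) / p

# Example: with a crisp cost limit of 10000 and tolerance 2000,
# a cost of 11000 receives membership 0.5.
print(mu_decreasing(11000, 10000, 2000))
```

A fuzzy nonlinear program built on such functions then seeks the decision maximizing the smallest membership value, which corresponds to the aspiration level reported for the fuzzy model in Table 1.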
Table 1: Comparison Table (R = 0.055)

S.No | Model       | Real Income (I) | Shortage (Deficit) Cost (D) | Setup Cost (S) | Upper Limit for Decrease in Salary (B) | Optimal Increase in New Salary (Q1*) | Optimal Re-order Level (Q2*) | Optimal Time of Quitting (T*) | Average Total Cost (C*) | Aspiration Level
1    | Crisp Model | 50000 | 11500 | 20000 | 10000 | 0.20 | 0.87 | 3.6 | 6028 | -
2    | Crisp Model | 50000 | 11500 | 20000 | 10500 | 0.21 | 0.91 | 3.8 | 6229 | -
3    | Crisp Model | 50000 | 10000 | 15000 | 10000 | 0.20 | 1.00 | 3.6 | 5688 | -
4    | Crisp Model | 50000 | 10000 | 15000 | 10500 | 0.21 | 1.05 | 3.8 | 5905 | -
5    | Crisp Model | 40000 | 11500 | 20000 | 10000 | 0.25 | 0.87 | 4.5 | 5983 | -
6    | Crisp Model | 40000 | 11500 | 20000 | 10500 | 0.26 | 0.91 | 4.8 | 6186 | -
7    | Crisp Model | 40000 | 10000 | 15000 | 10000 | 0.25 | 1.00 | 4.5 | 5660 | -
8    | Crisp Model | 40000 | 10000 | 15000 | 10500 | 0.26 | 1.05 | 4.8 | 5879 | -
9    | Fuzzy Model | 47945 | 10710 | 19210 | 10221 | 0.21 | 0.95 | 3.9 | 6016 | 0.8419