The document describes a fuzzy portfolio optimization model using trapezoidal possibility distributions to account for uncertainty in asset returns. The model formulates the portfolio selection problem as a mathematical optimization that maximizes expected return minus risk. Lagrange multipliers and Karush-Kuhn-Tucker conditions are used to derive the optimal solution. Real stock market data is used to provide a numerical example.
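The summary does not reproduce the model's formulas, but the possibilistic mean and variance of a trapezoidal fuzzy number, the usual building blocks of such models, can be sketched numerically. The definitions below follow the commonly used Carlsson–Fullér possibilistic moments, which may differ in detail from the model's exact risk measure, and the trapezoid (1, 2, 3, 4) is illustrative:

```python
import numpy as np

def possibilistic_mean_var(a, b, c, d, n=100001):
    """Possibilistic mean and variance of a trapezoidal fuzzy number (a, b, c, d),
    computed by numerically integrating over the gamma-level cuts."""
    g = np.linspace(0.0, 1.0, n)                 # membership levels gamma in [0, 1]
    lo = a + g * (b - a)                         # left endpoint of the gamma-cut
    hi = d - g * (d - c)                         # right endpoint of the gamma-cut
    mean = np.trapz(g * (lo + hi), g)            # E(A) = integral of gamma*(a1 + a2)
    var = np.trapz(0.5 * g * (hi - lo) ** 2, g)  # Var(A) = integral of gamma/2*(a2 - a1)^2
    return mean, var

m, v = possibilistic_mean_var(1, 2, 3, 4)
# For the symmetric trapezoid (1, 2, 3, 4) the possibilistic mean is 2.5
```

With these crisp moments in hand, the "expected return minus risk" objective reduces to an ordinary optimization over portfolio weights.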
Fuzzy relations, fuzzy graphs, and the extension principle are three important concepts in fuzzy logic. Fuzzy relations generalize classical relations to allow partial membership and describe relationships between objects to varying degrees. Fuzzy graphs describe functional mappings between input and output linguistic variables. The extension principle provides a procedure to extend functions defined on crisp domains to fuzzy domains by mapping fuzzy sets through functions. These concepts form the foundation of fuzzy rules and fuzzy arithmetic.
The document discusses the extension principle for generalizing crisp mathematical concepts to fuzzy sets. It defines the extension principle for mappings from Cartesian products to universes. An example illustrates how to define a fuzzy set in the output universe from fuzzy sets in the input universes and the mapping between them. Fuzzy numbers are defined by specific properties: being a normal fuzzy set, having closed intervals at every membership level, and having bounded support. Positive and negative fuzzy numbers are distinguished by their membership functions. Binary operations are classified as increasing or decreasing, and it is noted that the extension principle can be used to define the fuzzy result of applying an increasing or decreasing operation to fuzzy inputs. Notation for algebraic operations on fuzzy numbers is introduced, and several theorems are presented.
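The extension principle can be made concrete on discrete fuzzy sets: the membership of each output value is the supremum, over all input pairs mapping to it, of the minimum of the input memberships. The fuzzy sets and the addition operation below are illustrative, not taken from the document:

```python
from itertools import product
from collections import defaultdict

def extend(f, A, B):
    """Zadeh's extension principle for a binary operation f:
    mu_C(z) = sup over all f(x, y) = z of min(mu_A(x), mu_B(y))."""
    C = defaultdict(float)
    for (x, ma), (y, mb) in product(A.items(), B.items()):
        z = f(x, y)
        C[z] = max(C[z], min(ma, mb))
    return dict(C)

# "about 2" + "about 3" as discrete fuzzy numbers
A = {1: 0.5, 2: 1.0, 3: 0.5}
B = {2: 0.6, 3: 1.0, 4: 0.6}
C = extend(lambda x, y: x + y, A, B)
# C[5] == 1.0, since x = 2 and y = 3 both have full membership
```

The same function handles any increasing or decreasing binary operation by swapping the lambda, which is exactly the generality the extension principle provides.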
COMPARISON OF DIFFERENT APPROXIMATIONS OF FUZZY NUMBERS (ijfls)
The notions of interval approximations and trapezoidal approximations of fuzzy numbers are discussed. Comparisons are made between the close-interval approximation, the value-ambiguity interval approximation, and the distinct approximation, against the corresponding crisp and trapezoidal fuzzy numbers. A numerical example is included to justify the above-mentioned notions.
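One standard instance of an interval approximation, the nearest (close) interval approximation, replaces a fuzzy number by the interval whose endpoints are the alpha-cut endpoints averaged over all levels. A numeric sketch under that definition, using an illustrative trapezoid:

```python
import numpy as np

def nearest_interval(a, b, c, d, n=100001):
    """Nearest interval approximation of a trapezoidal fuzzy number:
    integrate the alpha-cut endpoints over alpha in [0, 1]."""
    alpha = np.linspace(0.0, 1.0, n)
    left = a + alpha * (b - a)        # lower alpha-cut endpoint
    right = d - alpha * (d - c)       # upper alpha-cut endpoint
    return np.trapz(left, alpha), np.trapz(right, alpha)

# For the trapezoid (1, 2, 4, 6) this yields [(1+2)/2, (4+6)/2] = [1.5, 5.0]
lo, hi = nearest_interval(1, 2, 4, 6)
```

Comparing such intervals against the original trapezoid is exactly the kind of check the numerical example in the paper performs.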
Bid and Ask Prices Tailored to Traders' Risk Aversion and Gain Propension: a ... (Waqas Tariq)
Risky asset bid and ask prices “tailored” to the risk aversion and gain propension of traders are set up. They are calculated through the principle of the Extended Gini premium, a standard method used in non-life insurance. Explicit formulae for the most common stochastic distributions of risky returns are calculated. Necessary and sufficient conditions for successful trading are also discussed.
Fuzzy logic is a form of logic that deals with reasoning that is approximate rather than fixed and exact. It was introduced in 1965 with Lotfi Zadeh's proposal of fuzzy set theory. Fuzzy logic uses fuzzy sets and membership functions to handle imprecise or uncertain inputs, and supports reasoning with partial truth values between fully true and fully false. Fuzzy controllers combine fuzzy logic with control theory to control complex systems. They involve fuzzification of inputs, application of fuzzy rules through inference, and defuzzification of outputs to obtain a crisp control action.
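The fuzzify, infer, defuzzify pipeline can be sketched for a single-input controller, here a temperature-to-fan-speed loop. All membership functions and rules below are invented for illustration:

```python
import numpy as np

x = np.linspace(0, 100, 1001)            # output universe (fan speed, %)

def tri(u, a, b, c):
    """Triangular membership function with peak at b."""
    return np.maximum(np.minimum((u - a) / (b - a), (c - u) / (c - b)), 0.0)

def control(temp):
    # 1. Fuzzification: degree to which the input is "cold" / "hot"
    cold = tri(temp, 0, 10, 25)
    hot = tri(temp, 15, 30, 40)
    # 2. Inference (max-min): clip each rule's output set by its firing strength
    slow = np.minimum(cold, tri(x, 0, 20, 50))    # if cold then slow
    fast = np.minimum(hot, tri(x, 50, 80, 100))   # if hot then fast
    agg = np.maximum(slow, fast)                  # aggregate the rule outputs
    # 3. Defuzzification: centroid of the aggregated set gives a crisp action
    return np.sum(x * agg) / np.sum(agg)

speed_cool = control(5.0)
speed_warm = control(28.0)   # hotter input yields a faster fan
```

Each stage maps directly onto the fuzzification, inference, and defuzzification steps described above.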
This document discusses the mathematical similarities between call/put option pricing in derivatives trading and the newsvendor problem in supply chain optimization.
The key points are:
1) Call/put option pricing and the newsvendor problem can both be formulated as expectations of "hockey stick" payoff functions, with the newsvendor problem equivalent to optimizing a portfolio of calls and puts.
2) Under certain assumptions like a Gaussian distribution, the formulas for call/put prices and newsvendor costs are analogous and involve concepts like delta hedging.
3) In both cases, when the strike/supply is optimized, the cost becomes insensitive to small changes in the underlying stock price/expected demand.
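The Gaussian-case analogy can be sketched: the newsvendor's optimal supply is the critical-fractile quantile of demand, with the normal CDF playing the role it does in an option's delta. The cost parameters below are illustrative:

```python
from statistics import NormalDist

def newsvendor_q(mu, sigma, underage, overage):
    """Optimal supply Q* = F^{-1}(cu / (cu + co)) for demand ~ N(mu, sigma):
    the critical fractile balances the cost of under- and over-stocking."""
    fractile = underage / (underage + overage)
    return NormalDist(mu, sigma).inv_cdf(fractile)

# underage cost 3, overage cost 1 -> fractile 0.75
q = newsvendor_q(mu=100, sigma=20, underage=3, overage=1)
# Q* = 100 + 20 * z_{0.75}, roughly 113.5
```

At this quantile the expected cost is stationary in Q, which is the flatness property noted in point 3.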
The document discusses fuzzy logic and fuzzy sets. It begins by explaining fuzzy logic is used to model imprecise concepts and dependencies using natural language terms. It then defines fuzzy variables, universes of discourse, and fuzzy sets which have membership functions assigning a degree of membership between 0 and 1. Operations on fuzzy sets like intersection, union, and complement are also covered. The document also discusses fuzzy rules, relations, and approximate reasoning using max-min inference.
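The standard set operations and max-min inference can be written out pointwise; the universe and membership grades below are illustrative:

```python
# Pointwise fuzzy set operations on a shared universe {x1, x2, x3}.
A = {'x1': 0.2, 'x2': 0.7, 'x3': 1.0}
B = {'x1': 0.5, 'x2': 0.4, 'x3': 0.8}

union        = {u: max(A[u], B[u]) for u in A}   # standard max union
intersection = {u: min(A[u], B[u]) for u in A}   # standard min intersection
complement_A = {u: round(1.0 - A[u], 10) for u in A}

# Max-min inference: compose an input A' with the rule relation R(u, v) = min(A(u), B(v)).
R = {(u, v): min(A[u], B[v]) for u in A for v in B}
A_prime = {'x1': 1.0, 'x2': 0.0, 'x3': 0.0}      # crisp observation "x1"
B_prime = {v: max(min(A_prime[u], R[(u, v)]) for u in A) for v in B}
# Every B'(v) is capped at A('x1') = 0.2, the firing strength of the rule
```

This is the max-min scheme the document refers to: the observation's compatibility with the antecedent caps the consequent's membership.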
A hypersoft set is an extension of the soft set in which more than one set of attributes occurs, which makes it very helpful in multi-criteria group decision-making problems. In a hypersoft set, the function F is a multi-argument function. In this paper, we use the notion of the Fuzzy Hypersoft Set (FHSS), which is a combination of a fuzzy set and a hypersoft set. In earlier research the concept of the Fuzzy Soft Set (FSS) was introduced and applied successfully in various fields. FHSS theory gives more flexibility than FSS for tackling parameterized problems of uncertainty. Where FSS fails to capture uncertainty and incompleteness, the FHSS environment is needed; it works well when there is more complexity in the parametric data, i.e. data that involve vague concepts. This work includes some basic set-theoretic operations on FHSSs and, to demonstrate the reliability and authenticity of these operations, shows their application with a suitable example. This example shows how FHSS theory can solve real decision-making problems.
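A minimal sketch of an FHSS as a multi-argument mapping, with intersection defined by pointwise min on shared argument tuples. The universe, attribute sets, and membership grades are invented for illustration, and the paper's own operations may differ in detail:

```python
# A fuzzy hypersoft set: F maps a *tuple* of attribute values (one drawn from
# each attribute set) to a fuzzy subset of the universe.
U = ['h1', 'h2', 'h3']               # universe (e.g. candidate houses)
sizes = ['small', 'large']           # attribute set 1
prices = ['cheap', 'costly']         # attribute set 2

F = {
    ('small', 'cheap'):  {'h1': 0.9, 'h2': 0.3, 'h3': 0.1},
    ('large', 'costly'): {'h1': 0.2, 'h2': 0.8, 'h3': 0.7},
}

def fhss_intersection(F1, F2):
    """Intersection of two FHSSs on their shared argument tuples: pointwise min."""
    common = F1.keys() & F2.keys()
    return {k: {u: min(F1[k][u], F2[k][u]) for u in F1[k]} for k in common}

G = fhss_intersection(F, F)          # intersecting a set with itself returns it
```

The multi-argument keys are what distinguish an FHSS from an FSS, whose function takes a single parameter rather than a tuple.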
This document provides an overview of fuzzy logic and fuzzy set theory with examples from image processing. Some key points:
- Fuzzy set theory was introduced by Lotfi Zadeh in 1965 and allows for degrees of membership rather than binary true/false values. Almost all real-world classes are fuzzy.
- Fuzzy logic handles imprecise concepts like "tall person" through membership functions and handles inferences through generalized modus ponens.
- Fuzzy logic has been applied to fields like image processing, where concepts like "light blue" are fuzzy, and speech recognition by assigning fuzzy values to phonemes.
- Techniques discussed include fuzzy membership functions, aggregation operations, alpha cuts, and linguistic variables.
Zadeh conceptualized the theory of fuzzy sets to provide a tool for the basis of possibility theory. Atanassov extended this theory with the introduction of the intuitionistic fuzzy set. Smarandache introduced the concept of the refined intuitionistic fuzzy set by further subdividing the membership and non-membership values. The limitation of allocating only a single membership and non-membership value to any object under consideration is addressed by this novel refinement. In this study, this idea is used to characterize the essential elements, e.g. subset, equal set, null set, and complement set, for the refined intuitionistic fuzzy set. Moreover, their basic set-theoretic operations, such as union, intersection, extended intersection, restricted union, restricted intersection, and restricted difference, are conceptualized. Furthermore, some basic laws are discussed with the help of an illustrative example in each case for vivid understanding.
Fuzzy logic provides a means of calculating intermediate values between absolute true and absolute false. It allows partial set membership and handles imprecise data. Fuzzy logic systems use membership functions to determine the degree to which inputs belong to sets and fuzzy inference systems to map inputs to outputs. Fuzzy logic has applications in devices like washing machines and cameras that require handling imprecise variables.
This document presents a method for solving an assignment problem where the costs are triangular intuitionistic fuzzy numbers rather than certain values. It introduces the concepts of intuitionistic fuzzy sets and triangular intuitionistic fuzzy numbers, and defines operations and a ranking method for comparing them. The paper formulates the intuitionistic fuzzy assignment problem mathematically as an optimization problem that minimizes the total intuitionistic fuzzy cost while satisfying constraints that each job is assigned to exactly one machine. It describes using an intuitionistic fuzzy Hungarian method to solve this type of assignment problem.
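The overall workflow can be sketched in two steps: rank each triangular intuitionistic fuzzy cost by a crisp score, then solve the resulting crisp assignment problem. The averaging score below is an illustrative stand-in for the paper's ranking method, and brute force replaces the Hungarian method for this tiny instance:

```python
from itertools import permutations

def score(tifn):
    """Crisp score of a triangular intuitionistic fuzzy number, given as a
    (membership triangle, non-membership triangle) pair. Illustrative only."""
    (a1, b1, c1), (a2, b2, c2) = tifn
    return (a1 + b1 + c1 + a2 + b2 + c2) / 6.0

# costs[i][j]: intuitionistic fuzzy cost of assigning job i to machine j
costs = [
    [((1, 2, 3), (0, 2, 4)), ((4, 5, 6), (3, 5, 7))],
    [((2, 3, 4), (1, 3, 5)), ((1, 1, 2), (0, 1, 3))],
]
crisp = [[score(c) for c in row] for row in costs]

# Exhaustive search over assignments; a Hungarian method scales this to larger n.
n = len(crisp)
best = min(permutations(range(n)),
           key=lambda p: sum(crisp[i][p[i]] for i in range(n)))
# best[i] is the machine assigned to job i
```

Once the ranking defuzzifies the costs, the "each job to exactly one machine" constraint is the classical assignment polytope, which is why a Hungarian-style method applies.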
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
The document defines and discusses properties of interval-valued Pythagorean fuzzy ideals in semigroups. Some key points:
- It defines interval-valued Pythagorean fuzzy sub-semigroup, left/right ideals, bi-ideals, and interior ideals of a semigroup.
- An example is provided to illustrate an interval-valued Pythagorean fuzzy sub-semigroup.
- Properties proved include the intersection of two interval-valued Pythagorean fuzzy sub-semigroups/left ideals is also an interval-valued Pythagorean fuzzy sub-semigroup/left ideal.
This document discusses Sugeno-style fuzzy inference and linear curve fitting using least squares. It explains how Sugeno inference uses output membership functions that represent exact values (singletons or linear functions of the inputs), allowing an exact numeric output. It provides an example of fitting input-output data with a Sugeno model using triangular and constant membership functions. The document also describes using linear and nonlinear functions as consequents in Sugeno rules. It concludes with an example of using least squares optimization to find the parameters of a linear model that best fits given input-output data.
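The least-squares step can be sketched directly: a first-order Sugeno consequent y = p*x + q is linear in its parameters, so with the antecedent firing strengths held fixed the parameters drop out of an ordinary least-squares solve. The data below are illustrative:

```python
import numpy as np

# Noisy samples of y ~ 2x + 1, standing in for one rule's input-output data.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

# Design matrix for the consequent parameters [p, q] of y = p*x + q.
A = np.column_stack([x, np.ones_like(x)])
(p, q), *_ = np.linalg.lstsq(A, y, rcond=None)
# p is close to 2 and q is close to 1
```

In a full Sugeno fit, each rule contributes rows weighted by its normalized firing strength, but the solve itself is this same linear least-squares problem.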
Fuzzy logic was introduced by Lotfi Zadeh in 1965 to address problems with classical logic being too precise. Fuzzy logic allows for truth values between 0 and 1 rather than binary true/false. It involves fuzzy sets, membership functions, linguistic variables, and fuzzy rules. Fuzzy logic can be applied to knowledge representation and inference using concepts like fuzzy predicates, relations, modifiers and quantifiers. It has various applications including household appliances, animation, industrial automation, and more.
This document defines and explains key concepts in fuzzy set theory, including fuzzy complements, unions, and intersections. It begins with an introduction to fuzzy sets as a generalization of classical sets that allows for gradual membership rather than binary membership. Membership functions assign elements a value between 0 and 1 indicating their degree of belonging to a set. The document then provides definitions and properties of fuzzy complements, unions, intersections, and other related concepts. It concludes with examples of applications of fuzzy set theory such as traffic monitoring systems, appliance controls, and medical diagnosis.
In this paper we introduce the notions of fuzzy ideals in BH-algebras and of fuzzy dot ideals of BH-algebras, and investigate some of their results.
This document discusses the classical normal linear regression model (CNLRM) and its assumptions. Under the CNLRM, the error terms ui are assumed to be independently and normally distributed with mean 0 and variance σ2. This normality assumption allows us to derive the probability distributions of the OLS estimators βˆ1, βˆ2, and σˆ2. Specifically, βˆ1 and βˆ2 are normally distributed with means equal to the true parameter values β1 and β2, and variances that shrink as the sample size grows. The ratios of βˆ1 − β1 and βˆ2 − β2 to their standard errors follow standard normal distributions, and (n − 2)σˆ2/σ2 follows a chi-square distribution with n − 2 degrees of freedom.
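These sampling distributions can be checked by simulation: with normal errors, OLS slope estimates across repeated samples center on the true β2 with variance σ2 divided by the sum of squared deviations of x. The parameter values below are illustrative:

```python
import random
import statistics

# Monte Carlo check of the CNLRM sampling distribution of the OLS slope.
random.seed(0)
beta1, beta2, sigma = 1.0, 2.0, 1.0
x = [i / 10 for i in range(30)]                    # fixed regressor values
xbar = statistics.mean(x)
sxx = sum((xi - xbar) ** 2 for xi in x)

slopes = []
for _ in range(2000):
    # Generate y under the CNLRM: normal errors with mean 0, variance sigma^2.
    ys = [beta1 + beta2 * xi + random.gauss(0, sigma) for xi in x]
    ybar = statistics.mean(ys)
    b2 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, ys)) / sxx
    slopes.append(b2)

# The estimates center on the true beta2 = 2, with standard deviation
# close to sigma / sqrt(sxx).
```

The analogous experiment for the intercept, or for the chi-square behavior of σˆ2, follows the same pattern.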
This document provides an introduction to fuzzy logic and fuzzy sets. It discusses key concepts such as fuzzy sets having degrees of membership between 0 and 1 rather than binary membership, and fuzzy logic allowing for varying degrees of truth. Examples are given of fuzzy sets representing partially full tumblers and desirable cities to live in. Characteristics of fuzzy sets such as support, crossover points, and logical operations like union and intersection are defined. Applications mentioned include vehicle control systems and appliance control using fuzzy logic to handle imprecise and ambiguous inputs.
An enhanced fuzzy rough set based clustering algorithm for categorical data (eSAT Journals)
Abstract: In today’s world everything is done digitally, so we have large amounts of raw data. These data are useful for predicting future events if we use them properly. Clustering is a technique in which closely related data are grouped together. Furthermore, there are several types of data: sequential, interval, categorical, etc. In this paper we show the problems with clustering categorical data using rough sets and how they can be overcome with our improvement.
A NEW APPROACH FOR RANKING OF OCTAGONAL INTUITIONISTIC FUZZY NUMBERS (ijfls)
In this paper we introduce octagonal intuitionistic fuzzy numbers with their membership and non-membership functions. A new method is proposed for finding an optimal solution to the intuitionistic fuzzy transportation problem in which the costs are octagonal intuitionistic fuzzy numbers. The procedure is illustrated with a numerical example.
An enhanced fuzzy rough set based clustering algorithm for categorical data (eSAT Publishing House)
This document summarizes a research paper that proposes an enhanced fuzzy rough set-based clustering algorithm for categorical data. The paper discusses problems with using traditional rough set theory to cluster categorical data when there are no crisp relations between attributes. It proposes using fuzzy logic to assign weights to attribute values and calculate lower approximations based on the similarity between sets, in order to cluster categorical data when crisp relations do not exist. The proposed method is described through an example comparing traditional rough set clustering to the new fuzzy rough set approach.
Denoising Process Based on Arbitrarily Shaped Windows (CSCJournals)
Many factors, such as moving objects, introduce noise in digital images, and the presence of noise degrades image quality. The image denoising process reconstructs a noiseless image and improves its quality. When an image is corrupted by additive white Gaussian noise (AWGN), denoising becomes a challenging process. In our research, we present an improved algorithm for image denoising in the wavelet domain. Homogeneous regions of an input image are estimated using a region merging algorithm. The local variance and a wavelet shrinkage algorithm are applied to denoise each image patch. Experimental results based on peak signal-to-noise ratio (PSNR) measurements showed that our algorithm provided better results than a denoising algorithm based on a minimum mean square error (MMSE) estimator.
Algorithms for generalized mean variance problems (guest5fe40a)
This document discusses portfolio theory and algorithms for solving mean/variance optimization problems with restrictions. It begins by summarizing the fundamentals of portfolio theory, including the mean/variance principle derived from utility theory. It then describes different algorithms - the Markowitz algorithm, generalized Markowitz algorithm, and extended Markowitz algorithm - that can solve portfolio optimization problems subject to various equality and inequality restrictions. Numerical examples are provided to illustrate the algorithms. The goal is to help practitioners implement portfolio strategies based on theoretical concepts.
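The equality-constrained core of these algorithms has a closed form via Lagrange multipliers. For the global minimum-variance portfolio under only the budget constraint (weights summing to 1), the optimum is w* = Σ⁻¹1 / (1ᵀΣ⁻¹1); the covariance matrix below is illustrative:

```python
import numpy as np

# Global minimum-variance portfolio: minimize w' Sigma w subject to sum(w) = 1.
# The Lagrangian stationarity condition gives w* proportional to Sigma^-1 @ 1.
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
ones = np.ones(3)
w = np.linalg.solve(Sigma, ones)   # solve Sigma w = 1 instead of inverting
w /= w.sum()                       # normalize so the budget constraint holds
# The lowest-variance asset receives the largest weight
```

Adding a target-return equality constraint, or the inequality restrictions the document discusses, is what the generalized and extended Markowitz algorithms handle beyond this closed form.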
This document summarizes a thesis abstract that analyzed the effects of road infrastructure on gross domestic product in Indonesia using unbalanced panel data from 2009 to 2012 for 33 provinces. The analysis used a two-way error component model to capture error components associated with both the individual units and time. Variance components of the error were estimated using Minimum Variance Quadratic Unbiased Estimation (MIVQUE), and regression parameters were estimated using Maximum Likelihood Estimation (MLE). MIVQUE provided the error variance components and MLE provided the best-fitting regression model relating the road infrastructure variables to GDP. The analysis showed that it is important to estimate the error variance components before estimating the parameters of unbalanced panel data regression models.
This summary describes the results of a frequency analysis of the variable SEXO in a dataset with 21 cases. The summary includes statistics such as the standard deviation, variance, range, minimum, maximum, mean, median, and mode. The analysis found that 10 cases (47.6%) were H and 10 cases (47.6%) were M, with 1 remaining case (4.8%).
Financial Benchmarking Of Transportation Companies In The New York Stock Exc... (ertekg)
Download link: https://ertekprojects.com/gurdal-ertek-publications/blog/financial-benchmarking-of-transportation-companies-in-the-new-york-stock-exchange-nyse-through-data-envelopment-analysis-dea-and-visualization/
In this paper, we present a benchmarking study of industrial transportation companies traded in the New York Stock Exchange (NYSE). There are two distinguishing aspects of our study: First, instead of using operational data for the input and the output items of the developed Data Envelopment Analysis (DEA) model, we use financial data of the companies that are readily available on the Internet. Secondly, we visualize the efficiency scores of the companies in relation to the subsectors and the number of employees. These visualizations enable us to discover interesting insights about the companies within each subsector, and about subsectors in comparison to each other. The visualization approach that we employ can be used in any DEA study that contains subgroups within a group. Thus, our paper also contains a methodological contribution.
This document summarizes an abstract for a thesis that analyzed the effects of road infrastructure on gross domestic product in Indonesia using unbalanced panel data from 2009 to 2012 for 33 provinces. The analysis used a two-way error component model to account for error influenced by observations and time. Variance components of the error were estimated using Minimum Variance Quadratic Unbiased Estimation (MIVQUE) and regression parameters were estimated using Maximum Likelihood Estimation (MLE). MIVQUE obtained variance error components and MLE obtained the best fitting regression model relating road infrastructure variables to GDP. The analysis showed it is important to first estimate error variance components before estimating parameters for unbalanced panel data regression models.
Este resumen describe los resultados de un análisis de frecuencias de la variable SEXO en un conjunto de datos con 21 casos. El resumen incluye estadísticos como la desviación estándar, varianza, rango, mínimo, máximo, media, mediana y moda. El análisis encontró que 10 casos (47.6%) eran H y 10 casos (47.6%) eran M, con 1 caso restante (4.8%).
Financial Benchmarking Of Transportation Companies In The New York Stock Exc...ertekg
Download Link > https://ertekprojects.com/gurdal-ertek-publications/blog/financial-benchmarking-of-transportation-companies-in-the-new-york-stock-exchange-nyse-through-data-envelopment-analysis-dea-and-visualization/
In this paper, we present a benchmarking study of industrial transportation companies traded in the New York Stock Exchange (NYSE). There are two distinguishing aspects of our study: First, instead of using operational data for the input and the output items of the developed Data Envelopment Analysis (DEA) model, we use financial data of the companies that are readily available on the Internet. Secondly, we visualize the efficiency scores of the companies in relation to the subsectors and the number of employees. These visualizations enable us to discover interesting insights about the companies within each subsector, and about subsectors in comparison to each other. The visualization approach that we employ can be used in any DEA study that contains subgroups within a group. Thus, our paper also contains a methodological contribution.
Online Multi-Person Tracking Using Variance Magnitude of Image colors and Sol...Pourya Jafarzadeh
The document describes a multi-object tracking method that formulates tracking as a Short Minimum Clique Problem (SMCP). It uses three consecutive frames divided into three clusters, where each clique between clusters represents a tracklet (partial trajectory) of a person. Edges between clusters are weighted based on color histogram similarity and eigenvalue similarity of bounding boxes. Occlusion handling is performed by saving color histograms of occluded people in a buffer and comparing them to newly detected people. The method was evaluated on challenging datasets and shown to achieve promising results compared to state-of-the-art methods.
Google regains US search market share for the first time since the Yahoo/Bing alliance. Facebook ad CPCs increase 54% in Q3 due to rising competition. Facebook ad spend increases 25% as advertisers see value in social media marketing. Tablets capture 77% of retail mobile ad spend in September 2011, with mobile spend projected to be 7-10% of total paid search spend by end of Q4 2011. UK search spend and ROI both increase 16% and 18% respectively in Q3.
This document discusses portfolio optimization using the tracking model method. It defines various types of investment risk that investors and financial institutions face, such as interest rate risk, business risk, credit risk, inflation risk, and reinvestment risk. It then examines various risk measures used in portfolio optimization models, including variance, mean absolute deviation, value at risk (VaR), and conditional value at risk (CVaR). The results section finds that using the tracking model and provided data, the portfolio is only feasible for a risk lover investor, as it invests entirely in the single best performing asset.
This document discusses estimating covariance matrices for portfolio selection. It introduces a shrinkage estimator that is an optimally weighted average of the sample covariance matrix and single-index covariance matrix. The empirical part compares these estimators to determine which produces the most efficient portfolio with smallest return variability. The sample covariance matrix has problems when the number of assets is large, as it has high variance and its inverse is a poor estimator. Shrinkage aims to improve upon the sample covariance matrix by combining it with a factor model-based estimator.
This document describes a portfolio optimization project. It analyzes historical stock return data for Apple and Netflix to construct an efficient frontier. A risk-free rate is calculated from treasury bill returns. An optimal risky portfolio is determined by maximizing the Sharpe ratio. Based on a risk aversion index of 1, the appropriate weights in the optimal portfolio and risk-free asset are calculated to maximize utility. Graphs of the efficient frontier, capital allocation line, and indifference curve illustrate the optimal portfolio selection.
The document is a scanned receipt from a grocery store purchase on June 15th, 2022 totaling $58.37. It lists items bought including ground beef, chicken breasts, tortillas, cheese, and produce such as tomatoes, lettuce, and onions. The receipt shows the item prices, taxes, and total amount due.
This document is a research project submitted by Nduati Michelle Wanjiku in partial fulfillment of the requirements for a Bachelor's degree in financial economics from Strathmore University in Nairobi, Kenya. The research project compares the relative performance of single-index models and multifactor models in determining the optimal portfolio allocation through the efficient frontier. It establishes that the single index model outperforms the multifactor model as it yields higher Sharpe ratios. This is attributed to the single index model containing characteristics of macroeconomic variables. The research uses historical factor betas between 2001 and 2012 to minimize risk and maximize returns in constructing the efficient frontier.
Portfolio diversification reduces risk by including various investments that are not perfectly correlated. The standard deviation is commonly used to quantify risk and measure how concentrated or diversified a portfolio is. Modern portfolio theory holds that investors can construct an efficient portfolio that optimizes the risk-return tradeoff by balancing different assets. Mathematical tools like the variance-covariance matrix and Lagrange multipliers can be used to calculate the minimum-variance or optimal portfolio given expected returns, variances and correlations of constituent assets.
Efficient Frontier Searching of Fixed Income Portfolio under CROSSSun Zhi
This document describes a framework for constructing efficient frontiers for fixed income portfolios under China's CROSS (China Risk Oriented Solvency System) regulatory framework. The framework uses quadratic programming to optimize portfolios to meet expected yield targets while staying within regulatory capital limits. A simulation case examines efficient frontiers with and without duration constraints. It finds that holding long-duration corporate bonds to maturity uses less regulatory capital than trading them. The framework allows insurance firms to maximize returns within capital limits by providing optimal asset allocations.
This document provides a report on a portfolio optimization project. It summarizes the construction, weekly performance, and rebalancing of a portfolio formed using Markowitz's modern portfolio theory. Over the course of a month, the portfolio was initially constructed using 20 stocks and was rebalanced weekly based on updated stock prices. The portfolio achieved a return of 4.58%, outperforming the S&P 500 benchmark. A risk analysis of the portfolio returns was also conducted using measures like the Sharpe ratio, Treynor ratio, and Sortino ratio.
This document is a portfolio optimization project report submitted by Tingwen Zhou and Xuan Ning to Professor Marcel Y. Blais on December 15, 2016. It analyzes the performance of a portfolio reconstructed on November 7, 2016 using a 3-year period of asset data. The portfolio underperformed, losing a total of $86,025. Various metrics are calculated to evaluate the portfolio such as Sharpe ratio, Treynor ratio, and maximum drawdown. The efficient frontier is analyzed over time as weights were rebalanced weekly.
The document summarizes the capital asset pricing model (CAPM) and reviews early empirical tests of the model. It begins by outlining the logic and key assumptions of the CAPM, including that the market portfolio must be mean-variance efficient. However, empirical tests found problems with the CAPM's predictions about the relationship between expected returns and market betas. Specifically, cross-sectional regressions did not find intercepts equal to the risk-free rate or slopes equal to the expected market premium. To address measurement error, later tests examined portfolios rather than individual assets. In general, the early empirical evidence revealed shortcomings in the CAPM's ability to explain returns.
This document summarizes research on approaches to portfolio construction that aim to mitigate tail risk and extreme losses. The researchers find that:
1) For data samples shorter than 30 years, it is not possible to statistically differentiate between portfolios constructed using minimum-variance and minimum-CVaR approaches based on ex-post tail risk metrics alone.
2) Employing a minimum-volatility overlay strategy is potentially a better way to reduce tail risk than explicitly penalizing losses via CVaR, as minimum-volatility portfolios inherently reduce exposure to volatility, the factor with the worst observed tail risk.
3) Adding a tail risk penalty to a mean-variance optimization improves ex-ante tail risk metrics but
1) The study examines the economic importance of accounting information by analyzing how accounting data from financial statements can improve portfolio optimization for US equities.
2) Using a parametric portfolio policy method, the researchers modeled portfolio weights as a linear function of three accounting characteristics - accruals, change in earnings, and asset growth - and compared it to weights based on size, book-to-market, and momentum.
3) They found that the accounting-based portfolio generated an out-of-sample annual information ratio of 1.9 compared to 1.5 for the price-based portfolio, indicating accounting information provides valuable signals for optimizing equity investments.
- The document summarizes key concepts from chapters 1.1 to 1.6 of the book "Pattern Recognition and Machine Learning" by Christopher M. Bishop.
- It introduces polynomial curve fitting, Bayesian curve fitting, decision theory, and information theory concepts such as entropy, Kullback-Leibler divergence, and their applications in machine learning.
- Key algorithms covered include linear and polynomial regression, maximum likelihood estimation, and using entropy and KL divergence to model probability distributions.
This document discusses using bootstrap methods to create confidence intervals for time series forecasts. It provides examples of time series data and introduces the AR(1) model. The document describes an algorithm for calculating a bootstrap confidence interval for forecasting from an AR(1) model. It then discusses a simulation study comparing empirical coverage rates of bootstrap confidence intervals under different parameters. Finally, it applies the bootstrap method to forecasting Gross National Product growth, comparing the results to a parametric approach.
This document discusses using bootstrap methods to create confidence intervals for time series forecasts. It provides background on time series models and the autoregressive (AR) process. It then presents an algorithm for calculating a bootstrap confidence interval for forecasts from an AR(1) model. A simulation study compares coverage rates for bootstrap confidence intervals under different parameters. Finally, the method is applied to US Gross National Product data to forecast and construct confidence intervals.
The document proposes applying robust techniques like support vector clustering to portfolio optimization models to address uncertainties. It outlines constructing a robust semi-mean absolute deviation optimization model that uses support vector clustering to simulate an uncertainty set capturing uncertain asset returns from historical data. The methodology involves collecting market data, cleaning the data, training and testing the robust portfolio optimization model on different datasets and analyzing the results to capture uncertainties better than fixed uncertainty sets.
Application of Graphic LASSO in Portfolio Optimization_Yixuan Chen & Mengxi J...Mengxi Jiang
- The document describes using graphical lasso to estimate the precision matrix of stock returns and apply portfolio optimization.
- Graphical lasso estimates the precision matrix instead of the covariance matrix to allow for sparsity. This makes the estimation more efficient for large datasets.
- The study uses 8 different models to simulate stock return data and compares the performance of graphical lasso, sample covariance, and shrinkage estimators on portfolio optimization of in-sample and out-of-sample test data. Graphical lasso performed best on out-of-sample test data, showing it can generate portfolios that generalize well.
The document discusses various mathematical concepts related to functions and graphs including:
1) Transformations of graphs such as translations, reflections, and rotations. It also discusses parent functions and their derivatives.
2) Examples of graphing functions after applying transformations to translate, scale, or reflect the original graphs. Equations are provided for the transformed graphs.
3) Theorems related to how statistics of data change after translations or scale changes. For example, the mean, median and mode change proportionally but variance, standard deviation, and range change in specific ways.
4) Concepts involving inverse functions, including using the horizontal line test to determine if an inverse is a function and notations for inverse functions
1. The document describes replicating the payoff of an exotic option with a portfolio using continuous time mathematics. The payoff depends on two underlying prices, S1 and S2.
2. By applying the Fundamental Theorem of Calculus and treating the function as fixed in one variable, an expression is derived for the payoff in terms of S1. This is then repeated for S2 to obtain an expression for the full two-dimensional payoff function.
3. The final expression for the replicated payoff function involves terms depending on the first and second derivatives of the function with respect to S1 and S2, as well as integral terms involving the second derivatives. This allows construction of a replicating
Many Decision Problems in business and social systems can be modeled using mathematical optimization, which seeks to maximize or minimize some objective which is a function of the decisions.
Stochastic Optimization Problems are mathematical programs where some of the data incorporated into the objective or constraints are Uncertain.
whereas, Deterministic Optimization Problems are formulated with known parameters.
The document discusses the Fundamental Theorem of Calculus, which has two parts. Part 1 establishes the relationship between differentiation and integration, showing that the derivative of an antiderivative is the integrand. Part 2 allows evaluation of a definite integral by evaluating the antiderivative at the bounds. Examples are given of using both parts to evaluate definite integrals. The theorem unified differentiation and integration and was fundamental to the development of calculus.
Intro to Quant Trading Strategies (Lecture 8 of 10)Adrian Aley
This document provides an introduction to performance measures for algorithmic trading strategies, focusing on Sharpe ratio and Omega. It outlines some limitations of Sharpe ratio, such as ignoring likelihoods of winning and losing trades. Omega is introduced as a measure that considers all moments of a return distribution by taking the ratio of expected gains to expected losses. Sharpe-Omega is proposed as a combined measure that retains the intuitiveness of Sharpe ratio while using put option price to better measure risk, incorporating higher moments. The document concludes with a discussion of portfolio optimization using Omega.
Symbolic Computation via Gröbner BasisIJERA Editor
The purpose of this paper is to find the orthogonal projection of a rational parametric curve onto a rational parametric surface in 3-space. We show that the orthogonal projection problem can be reduced to the problem of finding elimination ideals via Gröbnerbasis. We provide a computational algorithm to find the orthogonal projection, and include a few illustrative examples. The presented method is effective and potentially useful for many applications related to the design of surfaces and other industrial and research fields.
A GENERALIZED SAMPLING THEOREM OVER GALOIS FIELD DOMAINS FOR EXPERIMENTAL DESIGNcscpconf
In this paper, the sampling theorem for bandlimited functions over
domains is
generalized to one over ∏
domains. The generalized theorem is applicable to the
experimental design model in which each factor has a different number of levels and enables us
to estimate the parameters in the model by using Fourier transforms. Moreover, the relationship
between the proposed sampling theorem and orthogonal arrays is also provided.
A Generalized Sampling Theorem Over Galois Field Domains for Experimental Des...csandit
In this paper, the sampling theorem for bandlimited functions over
domains is
generalized to one over ∏
domains. The generalized theorem is applicable to the
experimental design model in which each factor has a different number of levels and enables us
to estimate the parameters in the model by using Fourier transforms. Moreover, the relationship
between the proposed sampling theorem and orthogonal arrays is also provided.
KEY
Optimal Prediction of the Expected Value of Assets Under Fractal Scaling Expo...mathsjournal
In this paper, the optimal prediction of the expected value of assets under the fractal scaling exponent is considered. We first obtain a fractal exponent, then derive a seemingly Black-Scholes parabolic equation. We further obtain its solutions under given conditions for the prediction of expected value of assets given the fractal exponent.
A PROBABILISTIC ALGORITHM OF COMPUTING THE POLYNOMIAL GREATEST COMMON DIVISOR...ijscmcj
In the earlier work, subresultant algorithm was proposed to decrease the coefficient growth in the Euclidean algorithm of polynomials. However, the output polynomial remainders may have a small factor which can be removed to satisfy our needs. Then later, an improved subresultant algorithm was given by representing the subresultant algorithm in another way, where we add a variant called 𝜏 to express the small factor. There was a way to compute the variant proposed by Brown, who worked at IBM. Nevertheless, the way failed to determine each𝜏 correctly.
The document discusses using the Nelder-Mead search algorithm to optimize parameters in the Fuzzy BEXA machine learning algorithm. Specifically, it aims to optimize parameters related to converting data files, defining membership functions, and setting threshold cutoffs, to maximize classification accuracy. The author developed a Java program to optimize two threshold parameters (αa and αc) using Nelder-Mead to search the parameter space and call Fuzzy BEXA to evaluate classification accuracy as the objective function. While Nelder-Mead works well for this optimization, initial parameter guesses can impact finding the true global optimum.
The document defines and discusses random variables. It begins by defining a random variable as a function that assigns a real number to each outcome of a random experiment. It then discusses the conditions for a function to be considered a random variable. The document outlines the key types of random variables as discrete, continuous, and mixed and introduces the cumulative distribution function (CDF) and probability density function (PDF) as ways to describe the distribution of a random variable. It provides examples of CDFs and PDFs for discrete random variables and discusses properties of distribution and density functions. The document also introduces important continuous random variables like the Gaussian random variable.
A note on estimation of population mean in sample survey using auxiliary info...Alexander Decker
1. The document proposes a class of estimators for estimating the population mean in two-phase sampling using auxiliary information.
2. Some common estimators like the ratio, product, and regression estimators are special cases within the proposed class. Expressions for bias and mean squared error of the estimators are obtained up to the first order of approximation.
3. Asymptotically optimum estimators are identified that have minimum mean squared error. The proposed class of estimators is found to perform better than usual ratio and other estimators for population mean estimation.
This document summarizes research on encoding Reiter's solution to the frame problem in modal logic. Specifically, it presents a modal logic counterpart to Reiter's regression technique. The paper introduces a version of deterministic PDL with quantification over actions and equality. It then describes how Reiter's approach can be encoded in this logic by representing action preconditions, possible causes of state changes, and successor state axioms that enable regression. The paper claims this provides a way to perform reasoning about actions using a modal logic framework with computational advantages over the Situation Calculus.
Similar to Fuzzy portfolio optimization_Yuxiang Ou (20)
Fuzzy Portfolio Optimization
Using Carlsson-Fullér-Majlender's Trapezoidal Possibility Model

Yuxiang Ou
Abstract
Within the framework of Carlsson-Fullér-Majlender's trapezoidal possibility model, we apply the Lagrange multiplier method and the Karush-Kuhn-Tucker conditions to derive the optimal solution to the fuzzy portfolio selection problem.
Keywords: Portfolio selection; trapezoidal fuzzy variables; Lagrange method; KKT conditions
Introduction
The portfolio selection problem concerns how to form an optimal portfolio, that is, how to choose the weight of each asset so as to generate the highest level of investor utility. Modern portfolio analysis was pioneered by Markowitz [7] in 1952. Investors are risk-averse return-seekers: we all want to maximize the return and minimize the risk of an investment, so there is a balance to strike. Markowitz's most essential insight is that we can use the expected rate of return (the mean) to model the return and the variance of that rate to represent the risk. Markowitz then derived the optimal choice under the assumption that investors have complete information, which in most instances is not true. To account for this uncertainty, it is better to employ fuzzy set theory, introduced by Zadeh [9] (1965) and Bellman and Zadeh [10] (1970). Many studies of the portfolio selection problem using various fuzzy formulations have since emerged. For the membership function of the fuzzy variable, researchers have proposed linear, tangent-type, interval-linear, exponential, and inverse-tangent functions, among others.
This paper is based primarily on Carlsson et al. [1] (2002), which assumes that the membership function of the fuzzy return has a trapezoidal form. Our model is essentially the same, and some expressions may be identical. The novelty of this paper is that we reorganize the argument and supply some proofs that are omitted in the original paper. Also, we use real stock market data for the numerical illustration instead of artificially assigning values to the model, which is why we obtain a different result from the original paper. Furthermore, we offer some critical reflections on the Carlsson-Fullér-Majlender model and state our concerns.
The rest of the paper proceeds as follows. In Section 2 we review some preliminaries on the related issues. In Section 3 we formulate the optimization problem in different ways, apply the Lagrange method to solve it, and employ the Karush-Kuhn-Tucker conditions to confirm the minimizer. We present the generalized algorithm in Section 4 and apply it to a realistic situation in Section 5. Section 6 offers some suggestions, and Section 7 concludes.
Preliminaries
2.1 Utility theory of portfolio investment
A utility function is viewed as a means of ranking portfolios. Higher utility values are
assigned to portfolios with more attractive risk-return profiles. Based on this rule, we can
design a function as follows:
$$U(P) = E(r_p) - 0.005 \times A \times \sigma^2(r_p)$$

where $A$ is an index of the investor's risk aversion ($A \approx 2.46$ for the average investor in the USA), $r_p$ is the rate of return on the portfolio, and $E(r_p)$ and $\sigma^2(r_p)$ are its mean and variance, respectively. The scaling factor 0.005 allows us to express the expected return and variance as percentages rather than decimals.

Note that $E(r_p)$ enters with a positive sign while $\sigma^2(r_p)$ enters with a negative sign, so this utility function is consistent with risk-averse behavior. Moreover, it spares us from dealing with complicated multiobjective optimization problems.
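As a quick illustration, the utility computation above can be sketched in Python; the return and variance figures in the example are hypothetical, not from the paper:

```python
def utility(expected_return_pct, variance_pct, risk_aversion=2.46):
    """U(P) = E(r_p) - 0.005 * A * sigma^2(r_p), with E and sigma^2 in percent."""
    return expected_return_pct - 0.005 * risk_aversion * variance_pct

# Hypothetical portfolio: 8% expected return, variance 25 (i.e., 5% std. dev.)
u = utility(8.0, 25.0)   # 8.0 - 0.005 * 2.46 * 25.0 = 7.6925
print(u)
```

Because variance enters negatively, any increase in risk at a fixed expected return lowers the utility score, which is exactly the ranking behavior described above.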
2.2 Probability or possibility approach
Investors make decisions on portfolio selection according to their knowledge and anticipation of the capital market, their budget constraints, and the available options. Because of the limited or incomplete information one can gather from the market, there is uncertainty in the decision-making process that must be addressed.

Probability theory is the standard approach to this issue, equating uncertainty with randomness. Nevertheless, this is not exactly right: subjective judgement makes a huge difference in decision-making, yet it is difficult to incorporate into probability theory. Assigning probabilities also becomes problematic when higher precision and more decimal places are demanded.

Alternatively, in this paper we assume that the rates of return on assets are modeled by possibility distributions. That is, the rate of return on the $i$th asset is represented by a fuzzy number $r_i$, and $r_i(t)$, $t \in \mathbb{R}$, is interpreted as the degree of possibility of the statement that "$t$ will be the rate of return on the $i$th asset"; the function $r_i(\cdot)$ is also called the membership function. In our method we consider only trapezoidal possibility distributions.
2.3 Trapezoidal fuzzy variable
2.3.1 Membership function
The definition of a trapezoidal fuzzy variable is based on its membership function.

Definition. A fuzzy number $A$ is called trapezoidal with tolerance interval $[a, b]$, left width $\alpha$ and right width $\beta$ if its membership function has the following form:

$$A(t) = \begin{cases} 1 - \dfrac{a - t}{\alpha} & \text{if } a - \alpha \le t \le a, \\[4pt] 1 & \text{if } a \le t \le b, \\[4pt] 1 - \dfrac{t - b}{\beta} & \text{if } b \le t \le b + \beta, \\[4pt] 0 & \text{otherwise,} \end{cases}$$

and we denote $A$ by $A = (a, b, \alpha, \beta)$.

This membership function can be visualized as a trapezoid (Fig. 1).
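The piecewise definition translates directly into code. The following sketch evaluates $A(t)$ for a trapezoidal fuzzy number; the parameters in the example are hypothetical:

```python
def trapezoidal_membership(t, a, b, alpha, beta):
    """Membership degree A(t) of the trapezoidal fuzzy number A = (a, b, alpha, beta)."""
    if a - alpha <= t < a:
        return 1.0 - (a - t) / alpha   # left slope, rising from 0 to 1
    if a <= t <= b:
        return 1.0                     # tolerance interval: full membership
    if b < t <= b + beta:
        return 1.0 - (t - b) / beta    # right slope, falling from 1 to 0
    return 0.0                         # outside the support

# A = (2, 4, 1, 2): plateau on [2, 4], support (1, 6)
print(trapezoidal_membership(1.5, 2, 4, 1, 2))  # halfway up the left slope
print(trapezoidal_membership(3.0, 2, 4, 1, 2))  # inside the tolerance interval
print(trapezoidal_membership(5.0, 2, 4, 1, 2))  # halfway down the right slope
```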
2.3.2 $\gamma$-level set
A $\gamma$-level set of a fuzzy variable consists of all values whose grade of membership is at least $\gamma$. We can then modify Fig. 1 to get a closer look at the issue (Fig. 2).

Proposition 1. Let $A = (a, b, \alpha, \beta)$ be a trapezoidal fuzzy variable and let $[A]^\gamma = [a_1(\gamma), a_2(\gamma)]$ be the corresponding $\gamma$-level set. Then

$$[A]^\gamma = [a_1(\gamma), a_2(\gamma)] = [a - (1 - \gamma)\alpha,\; b + (1 - \gamma)\beta], \quad \forall \gamma \in [0, 1].$$

Proof. It is easy to check that the proposition holds for $\gamma \in \{0, 1\}$, so let us focus on the case $0 < \gamma < 1$. From Fig. 2 we observe that the $\gamma$-level line intersects $A$'s membership function at two points, $a_1(\gamma)$ and $a_2(\gamma)$, which we now derive.

For $a_1(\gamma)$, set $1 - \dfrac{a - t}{\alpha} = \gamma$, which gives $a_1(\gamma) = t = a - (1 - \gamma)\alpha$.

For $a_2(\gamma)$, set $1 - \dfrac{t - b}{\beta} = \gamma$, which gives $a_2(\gamma) = t = b + (1 - \gamma)\beta$.

Thus $[A]^\gamma = [a_1(\gamma), a_2(\gamma)] = [a - (1 - \gamma)\alpha,\; b + (1 - \gamma)\beta]$. ∎
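Proposition 1 gives the $\gamma$-cut endpoints in closed form. A small sketch (with hypothetical parameters) shows how the level set shrinks from the support at $\gamma = 0$ down to the tolerance interval at $\gamma = 1$:

```python
def gamma_level_set(a, b, alpha, beta, gamma):
    """Endpoints [a1(gamma), a2(gamma)] of the gamma-level set of A = (a, b, alpha, beta)."""
    return (a - (1.0 - gamma) * alpha, b + (1.0 - gamma) * beta)

# A = (2, 4, 1, 2)
print(gamma_level_set(2, 4, 1, 2, 0.0))  # the support
print(gamma_level_set(2, 4, 1, 2, 0.5))  # an intermediate cut
print(gamma_level_set(2, 4, 1, 2, 1.0))  # the tolerance interval [a, b]
```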
2.3.3 Possibilistic mean
The crisp possibilistic mean value of a fuzzy variable $A$ with $[A]^\gamma = [a_1(\gamma), a_2(\gamma)]$ is defined as

$$E(A) = \int_0^1 \gamma\,[a_1(\gamma) + a_2(\gamma)]\, d\gamma \tag{1}$$

Proposition 2. Let $A = (a, b, \alpha, \beta)$ be a trapezoidal fuzzy variable. Then

$$E(A) = \frac{a + b}{2} + \frac{\beta - \alpha}{6} \tag{2}$$

Proof. According to the definition, we can calculate the possibilistic mean of a trapezoidal fuzzy variable as follows:

$$\begin{aligned}
E(A) &= \int_0^1 \gamma\,[a_1(\gamma) + a_2(\gamma)]\, d\gamma \\
&= \int_0^1 \gamma\,[a - (1 - \gamma)\alpha + b + (1 - \gamma)\beta]\, d\gamma \\
&= \int_0^1 \gamma\,[(\alpha - \beta)\gamma + a + b + \beta - \alpha]\, d\gamma \\
&= (\alpha - \beta)\int_0^1 \gamma^2\, d\gamma + (a + b + \beta - \alpha)\int_0^1 \gamma\, d\gamma \\
&= \frac{\alpha - \beta}{3} + \frac{a + b + \beta - \alpha}{2} \\
&= \frac{a + b}{2} + \frac{\beta - \alpha}{2} - \frac{\beta - \alpha}{3} \\
&= \frac{a + b}{2} + \frac{\beta - \alpha}{6}. \qquad ∎
\end{aligned}$$
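Proposition 2 can be sanity-checked numerically by approximating the defining integral (1) with a midpoint rule; the parameters below are hypothetical:

```python
def possibilistic_mean(a, b, alpha, beta):
    """Closed form (2): E(A) = (a + b)/2 + (beta - alpha)/6."""
    return (a + b) / 2.0 + (beta - alpha) / 6.0

def possibilistic_mean_numeric(a, b, alpha, beta, n=10000):
    """Midpoint-rule approximation of E(A) = integral_0^1 of gamma*(a1 + a2) d(gamma)."""
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        g = (k + 0.5) * h                 # midpoint of the k-th subinterval
        a1 = a - (1.0 - g) * alpha        # level-set endpoints from Proposition 1
        a2 = b + (1.0 - g) * beta
        total += g * (a1 + a2) * h
    return total

# A = (2, 4, 1, 2): closed form gives 3 + 1/6
assert abs(possibilistic_mean(2, 4, 1, 2) - possibilistic_mean_numeric(2, 4, 1, 2)) < 1e-6
```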
2.3.4 Possibilistic variance
The crisp possibilistic variance of a fuzzy variable $A$ with $[A]^\gamma = [a_1(\gamma), a_2(\gamma)]$ is defined as

$$\sigma^2(A) = \frac{1}{2}\int_0^1 \gamma\,[a_2(\gamma) - a_1(\gamma)]^2\, d\gamma \tag{3}$$

Proposition 3. Let $A = (a, b, \alpha, \beta)$ be a trapezoidal fuzzy variable. Then

$$\sigma^2(A) = \left(\frac{b - a}{2} + \frac{\alpha + \beta}{6}\right)^2 + \frac{(\alpha + \beta)^2}{72} \tag{4}$$
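The closed-form possibilistic variance of a trapezoidal fuzzy number can be checked numerically against the defining integral (3), in the same midpoint-rule style as for the mean; the parameters below are hypothetical:

```python
def possibilistic_variance(a, b, alpha, beta):
    """Closed form: ((b-a)/2 + (alpha+beta)/6)^2 + (alpha+beta)^2 / 72."""
    return ((b - a) / 2.0 + (alpha + beta) / 6.0) ** 2 + (alpha + beta) ** 2 / 72.0

def possibilistic_variance_numeric(a, b, alpha, beta, n=10000):
    """Midpoint-rule approximation of (1/2) * integral_0^1 of gamma*(a2 - a1)^2 d(gamma)."""
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        g = (k + 0.5) * h
        # width of the gamma-level set, a2(g) - a1(g), from Proposition 1
        width = (b - a) + (1.0 - g) * (alpha + beta)
        total += 0.5 * g * width ** 2 * h
    return total

# A = (2, 4, 1, 2): closed form gives (1 + 1/2)^2 + 9/72 = 2.375
assert abs(possibilistic_variance(2, 4, 1, 2) - possibilistic_variance_numeric(2, 4, 1, 2)) < 1e-6
```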
We use the following notation:

$x_i$: the weight of security $i$ in the portfolio;
$r_i$: the rate of return on security $i$;
$r_p$: the rate of return on the portfolio.

Then $r_p = \sum_{i=1}^n r_i x_i$ and $\sum_{i=1}^n x_i = 1$. As we allow neither short selling nor leveraged buying, we also have $0 \le x_i \le 1$.

Accordingly, our portfolio selection problem is equivalent to the following mathematical programming problem:

$$\max_{x_i}\; U(P) = E\!\left(\sum_{i=1}^n r_i x_i\right) - 0.005 \times A \times \sigma^2\!\left(\sum_{i=1}^n r_i x_i\right)$$
$$\text{s.t.}\quad \sum_{i=1}^n x_i = 1,\; x_i \ge 0,\; i = 1, 2, \ldots, n \tag{5}$$

where $r_i = (a_i, b_i, \alpha_i, \beta_i)$, $i = 1, 2, \ldots, n$, are fuzzy variables of trapezoidal form.
3.2 Translations of the optimization problem

Note that in Sections 2.3.3 and 2.3.4 we derived that the possibilistic mean and variance of a trapezoidal fuzzy variable $A = (a, b, \alpha, \beta)$ are

$$E(A) = \frac{a+b}{2} + \frac{\beta-\alpha}{6} \quad \text{and} \quad \sigma^2(A) = \left(\frac{b-a}{2} + \frac{\alpha+\beta}{6}\right)^2 + \frac{(\alpha+\beta)^2}{72},$$

respectively.
Then for the trapezoidal fuzzy numbers $r_i = (a_i, b_i, \alpha_i, \beta_i)$, $i = 1, 2, \ldots, n$, we have

$$E(r_i) = \frac{a_i+b_i}{2} + \frac{\beta_i-\alpha_i}{6} = \frac{1}{2}\left[a_i + b_i + \frac{1}{3}(\beta_i - \alpha_i)\right],$$

thus

$$E\left(\sum_{i=1}^n r_i x_i\right) = \sum_{i=1}^n x_i E(r_i) = \frac{1}{2}\sum_{i=1}^n \left[a_i + b_i + \frac{1}{3}(\beta_i - \alpha_i)\right] x_i \qquad (6)$$
And since

$$\sigma^2(r_i) = \left(\frac{b_i-a_i}{2} + \frac{\alpha_i+\beta_i}{6}\right)^2 + \frac{(\alpha_i+\beta_i)^2}{72} = \left(\frac{1}{2}\left[b_i - a_i + \frac{1}{3}(\alpha_i + \beta_i)\right]\right)^2 + \frac{(\alpha_i+\beta_i)^2}{72},$$

when we ignore the covariance between the rates of return on different securities, we have

$$\sigma^2\left(\sum_{i=1}^n r_i x_i\right) = \left(\frac{1}{2}\sum_{i=1}^n \left[b_i - a_i + \frac{1}{3}(\alpha_i + \beta_i)\right] x_i\right)^2 + \frac{1}{72}\left[\sum_{i=1}^n (\alpha_i + \beta_i) x_i\right]^2 \qquad (7)$$
If we introduce the notations

$$u_i = \frac{1}{2}\left[a_i + b_i + \frac{1}{3}(\beta_i - \alpha_i)\right], \qquad v_i = \frac{\sqrt{0.005A}}{2}\left[b_i - a_i + \frac{1}{3}(\alpha_i + \beta_i)\right], \qquad w_i = \sqrt{\frac{0.005A}{72}}\,(\alpha_i + \beta_i),$$

then

$$E\left(\sum_{i=1}^n r_i x_i\right) = \frac{1}{2}\sum_{i=1}^n \left[a_i + b_i + \frac{1}{3}(\beta_i - \alpha_i)\right] x_i = \sum_{i=1}^n u_i x_i,$$
and

$$\begin{aligned}
\sigma^2\left(\sum_{i=1}^n r_i x_i\right) &= \left(\frac{1}{2}\sum_{i=1}^n\left[b_i - a_i + \frac{1}{3}(\alpha_i+\beta_i)\right]x_i\right)^2 + \frac{1}{72}\left[\sum_{i=1}^n(\alpha_i+\beta_i)x_i\right]^2 \\
&= \frac{1}{0.005A}\left(\sum_{i=1}^n v_i x_i\right)^2 + \frac{1}{72}\cdot\frac{72}{0.005A}\left(\sum_{i=1}^n w_i x_i\right)^2 \\
&= \frac{1}{0.005A}\left(\sum_{i=1}^n v_i x_i\right)^2 + \frac{1}{0.005A}\left(\sum_{i=1}^n w_i x_i\right)^2,
\end{aligned}$$
thus,

$$\begin{aligned}
U(P) &= E\left(\sum_{i=1}^n r_i x_i\right) - 0.005 \times A \times \sigma^2\left(\sum_{i=1}^n r_i x_i\right) \\
&= \sum_{i=1}^n u_i x_i - 0.005A \times \frac{1}{0.005A}\left[\left(\sum_{i=1}^n v_i x_i\right)^2 + \left(\sum_{i=1}^n w_i x_i\right)^2\right] \\
&= \sum_{i=1}^n u_i x_i - \left(\sum_{i=1}^n v_i x_i\right)^2 - \left(\sum_{i=1}^n w_i x_i\right)^2,
\end{aligned}$$
and the optimization problem becomes

$$\max_{x_i}\; U(P) = \sum_{i=1}^n u_i x_i - \left(\sum_{i=1}^n v_i x_i\right)^2 - \left(\sum_{i=1}^n w_i x_i\right)^2$$
$$\text{s.t.}\quad \sum_{i=1}^n x_i = 1,\; x_i \ge 0,\; i = 1, 2, \ldots, n \qquad (8)$$

Here the $i$th asset is represented by a triplet $(v_i, w_i, u_i)$, where $u_i$ denotes its possibilistic expected value and $v_i^2 + w_i^2$ denotes its possibilistic variance multiplied by the constant $0.005 \times A$.
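The mapping from a trapezoidal return $(a_i, b_i, \alpha_i, \beta_i)$ and the constant $A$ to the triplet $(v_i, w_i, u_i)$ can be sketched as follows (the function name is mine; the FB figures used as a check are those of the numerical illustration later in the paper):

```python
from math import sqrt

# Sketch: asset triplet (v_i, w_i, u_i) for the risk-adjusted utility in (8),
# built from a trapezoidal return r_i = (a, b, alpha, beta) and constant A.

def asset_triplet(a, b, alpha, beta, A):
    u = 0.5 * (a + b + (beta - alpha) / 3)                   # possibilistic mean
    v = sqrt(0.005 * A) * 0.5 * (b - a + (alpha + beta) / 3)
    w = sqrt(0.005 * A / 72) * (alpha + beta)
    return v, w, u

# FB trapezoid r_1 = (-8.0, 16.0, 22.2, 31.9) with A = 2.46, as in the
# numerical illustration; expected triplet (2.331, 0.707, 5.617).
v, w, u = asset_triplet(-8.0, 16.0, 22.2, 31.9, 2.46)
assert (round(v, 3), round(w, 3), round(u, 3)) == (2.331, 0.707, 5.617)
```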
The convex hull of $\{(v_i, w_i, u_i) : i = 1, 2, \ldots, n\}$, denoted by $T$ and defined by

$$T = \mathrm{conv}\{(v_i, w_i, u_i) : i = 1, 2, \ldots, n\} = \left\{\left(\sum_{i=1}^n v_i x_i,\; \sum_{i=1}^n w_i x_i,\; \sum_{i=1}^n u_i x_i\right) : \sum_{i=1}^n x_i = 1,\; x_i \ge 0,\; i = 1, 2, \ldots, n\right\},$$

is a convex polyhedron in $\mathcal{R}^3$. We can move to any point of the polytope by varying the values of $x_i$. In other words, letting $v_0 = \sum_{i=1}^n v_i x_i$, $w_0 = \sum_{i=1}^n w_i x_i$, and $u_0 = \sum_{i=1}^n u_i x_i$, we need to find the point within the polytope generating the highest value of $u_0 - v_0^2 - w_0^2$. Then problem (8) turns into the following three-dimensional non-linear programming problem:
$$\max_{v_0, w_0, u_0}\; U(P) = u_0 - v_0^2 - w_0^2 \quad \text{s.t.}\; (v_0, w_0, u_0) \in T \qquad (9)$$

Or, equivalently,

$$\min_{v_0, w_0, u_0}\; U(P) = v_0^2 + w_0^2 - u_0 \quad \text{s.t.}\; (v_0, w_0, u_0) \in T \qquad (10)$$
Note that $T$ is a compact and convex subset of $\mathcal{R}^3$, and the implicit function

$$g_c(v_0, w_0) = v_0^2 + w_0^2 - c$$

is strictly convex for any $c \in \mathcal{R}$. This means that any optimal solution to (10) must lie on the boundary of $T$. As $T$ is a polyhedron in $\mathcal{R}^3$ and the optimal solution must be on its boundary, any optimal solution can be obtained as a convex combination of at most 3 extreme points of $T$. Carlsson, Fullér and Majlender (2002) [1] presented an algorithm for finding such an optimal solution. In the algorithm, one calculates: (i) the (exact) solutions to all conceivable 3-asset problems with non-collinear assets, (ii) the (exact) solutions to all conceivable 2-asset problems with distinguishable assets, and (iii) the utility value of each single asset. One then compares the utility values of all feasible solutions, and the portfolio with the highest utility value is chosen as the optimal solution to the portfolio selection problem.
3.3 Optimal solutions

3.3.1 3-asset problems

Consider three noncollinear assets $(v_i, w_i, u_i)$, $i = 1, 2, 3$.

Proposition 4. For any noncollinear assets $(v_i, w_i, u_i)$, $i = 1, 2, 3$, there exist no $(\alpha_1, \alpha_2) \in \mathcal{R}^2$, $(\alpha_1, \alpha_2) \ne 0$, such that

$$\alpha_1\begin{pmatrix} v_1 \\ w_1 \\ u_1 \end{pmatrix} + \alpha_2\begin{pmatrix} v_2 \\ w_2 \\ u_2 \end{pmatrix} - (\alpha_1 + \alpha_2)\begin{pmatrix} v_3 \\ w_3 \\ u_3 \end{pmatrix} = 0.$$

Proof. Suppose there exist $(\alpha_1, \alpha_2) \in \mathcal{R}^2$, $(\alpha_1, \alpha_2) \ne 0$, such that

$$\alpha_1\begin{pmatrix} v_1 \\ w_1 \\ u_1 \end{pmatrix} + \alpha_2\begin{pmatrix} v_2 \\ w_2 \\ u_2 \end{pmatrix} - (\alpha_1 + \alpha_2)\begin{pmatrix} v_3 \\ w_3 \\ u_3 \end{pmatrix} = 0.$$

Then

$$\alpha_1\begin{pmatrix} v_1 - v_3 \\ w_1 - w_3 \\ u_1 - u_3 \end{pmatrix} + \alpha_2\begin{pmatrix} v_2 - v_3 \\ w_2 - w_3 \\ u_2 - u_3 \end{pmatrix} = 0,$$

that is,

$$\begin{pmatrix} v_1 - v_3 \\ w_1 - w_3 \\ u_1 - u_3 \end{pmatrix} = -\frac{\alpha_2}{\alpha_1}\begin{pmatrix} v_2 - v_3 \\ w_2 - w_3 \\ u_2 - u_3 \end{pmatrix} \;\text{if } \alpha_1 \ne 0, \quad \text{or} \quad \begin{pmatrix} v_2 - v_3 \\ w_2 - w_3 \\ u_2 - u_3 \end{pmatrix} = -\frac{\alpha_1}{\alpha_2}\begin{pmatrix} v_1 - v_3 \\ w_1 - w_3 \\ u_1 - u_3 \end{pmatrix} \;\text{if } \alpha_2 \ne 0.$$

We find collinearity in both cases, which contradicts the noncollinearity assumption. ∎
$$= \det\begin{pmatrix} q_1 & r_1 \\ q_2 & r_2 \end{pmatrix} = 0.$$

As in our proof of Proposition 6, this contradicts the noncollinearity assumption on $(v_i, w_i, u_i)$, $i = 1, 2, 3$. So $y^T \nabla_x^2 L(x, \lambda)\, y \ne 0$. Then from inequality (17) we know that $y^T \nabla_x^2 L(x, \lambda)\, y > 0$, i.e. $L''(x, \lambda)$ is a positive definite matrix at $x = x^*$; thus $x = x^*$ is a minimizer of the utility function in problem (11). ∎
3.3.2 2-asset problems

Now consider a 2-asset problem with two assets, denoted $(v_1, w_1, u_1)$ and $(v_2, w_2, u_2)$, such that $(v_1, w_1, u_1) \ne (v_2, w_2, u_2)$. The optimization problem turns into

$$\min_{x_1, x_2}\; U(P) = (v_1 x_1 + v_2 x_2)^2 + (w_1 x_1 + w_2 x_2)^2 - (u_1 x_1 + u_2 x_2) \quad \text{s.t.}\; x_1 + x_2 = 1 \qquad (18)$$

The Lagrange function of this constrained problem is

$$L(x, \lambda) = (v_1 x_1 + v_2 x_2)^2 + (w_1 x_1 + w_2 x_2)^2 - (u_1 x_1 + u_2 x_2) + \lambda(x_1 + x_2 - 1) \qquad (19)$$

The Karush-Kuhn-Tucker necessary conditions are

$$\begin{cases}
2 v_1 (v_1 x_1 + v_2 x_2) + 2 w_1 (w_1 x_1 + w_2 x_2) - u_1 + \lambda = 0 \\
2 v_2 (v_1 x_1 + v_2 x_2) + 2 w_2 (w_1 x_1 + w_2 x_2) - u_2 + \lambda = 0 \\
x_1 + x_2 - 1 = 0
\end{cases} \qquad (20)$$
Subtracting the second equation from the first in (20), we get

$$2(v_1 - v_2)(v_1 x_1 + v_2 x_2) + 2(w_1 - w_2)(w_1 x_1 + w_2 x_2) - (u_1 - u_2) = 0,$$

and substituting $x_2$ using the third equation:

$$2(v_1 - v_2)[v_1 x_1 + v_2 (1 - x_1)] + 2(w_1 - w_2)[w_1 x_1 + w_2 (1 - x_1)] - (u_1 - u_2) = 0$$
$$2[(v_1 - v_2)^2 x_1 + (w_1 - w_2)^2 x_1] + 2(v_1 - v_2)v_2 + 2(w_1 - w_2)w_2 = u_1 - u_2,$$

i.e.

$$\left[(v_1 - v_2)^2 + (w_1 - w_2)^2\right] x_1 = \frac{1}{2}(u_1 - u_2) - (v_1 - v_2)v_2 - (w_1 - w_2)w_2 \qquad (21)$$

If $(v_1 - v_2)^2 + (w_1 - w_2)^2 \ne 0$, then we find the solution $x^* = (x_1^*, x_2^*) = (x_1^*, 1 - x_1^*)$, where

$$x_1^* = \frac{1}{(v_1 - v_2)^2 + (w_1 - w_2)^2}\left[\frac{1}{2}(u_1 - u_2) - (v_1 - v_2)v_2 - (w_1 - w_2)w_2\right] \qquad (22)$$

Otherwise, if $v_1 = v_2$ and $w_1 = w_2$, then equation (21) forces $u_1 = u_2$, which contradicts the assumption that the two assets are not identical. Therefore, we can always obtain a candidate solution to the constrained minimization problem. The only question is whether this candidate solution minimizes our selection function or not.
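Equation (22) is a one-line closed form once the two triplets are known. A minimal sketch (function name is mine; the FB and KO triplets used as a check are those of the numerical illustration, where $x_1^* \approx 0.517$):

```python
# Sketch: closed-form 2-asset solution x1* from equation (22).

def two_asset_x1(asset1, asset2):
    """asset1, asset2 are triplets (v, w, u); returns x1* of equation (22)."""
    v1, w1, u1 = asset1
    v2, w2, u2 = asset2
    denom = (v1 - v2) ** 2 + (w1 - w2) ** 2
    if denom == 0:
        raise ValueError("assets are indistinguishable in (v, w)")
    return (0.5 * (u1 - u2) - (v1 - v2) * v2 - (w1 - w2) * w2) / denom

fb = (2.331, 0.707, 5.617)   # triplets from the numerical illustration
ko = (0.643, 0.102, 0.000)
assert abs(two_asset_x1(fb, ko) - 0.517) < 1e-3
```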
Similarly, we take a look at $L''(x, \lambda)$. Since

$$L(x, \lambda) = (v_1 x_1 + v_2 x_2)^2 + (w_1 x_1 + w_2 x_2)^2 - (u_1 x_1 + u_2 x_2) + \lambda(x_1 + x_2 - 1), \qquad (19)$$

then

$$\nabla_x L(x, \lambda) = \begin{pmatrix} 2 v_1 (v_1 x_1 + v_2 x_2) + 2 w_1 (w_1 x_1 + w_2 x_2) - u_1 + \lambda \\ 2 v_2 (v_1 x_1 + v_2 x_2) + 2 w_2 (w_1 x_1 + w_2 x_2) - u_2 + \lambda \end{pmatrix},$$

so

$$\nabla_x^2 L(x, \lambda) = 2\begin{pmatrix} v_1^2 + w_1^2 & v_1 v_2 + w_1 w_2 \\ v_1 v_2 + w_1 w_2 & v_2^2 + w_2^2 \end{pmatrix} = 2\left(\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}^T + \begin{pmatrix} w_1 \\ w_2 \end{pmatrix}\begin{pmatrix} w_1 \\ w_2 \end{pmatrix}^T\right),$$
hence

$$y^T \nabla_x^2 L(x, \lambda)\, y = 2\,(v_1 y_1 + v_2 y_2)^2 + 2\,(w_1 y_1 + w_2 y_2)^2 \ge 0$$

holds for any $y = (y_1, y_2) \in \mathcal{R}^2$.

If $y^T \nabla_x^2 L(x, \lambda)\, y = 0$, then $v_1 y_1 + v_2 y_2 = 0$ and $w_1 y_1 + w_2 y_2 = 0$.

For any $y = (y_1, y_2) \in \mathcal{R}^2$ such that $y \ne 0$ and $y_1 + y_2 = 0$, we have $y_2 = -y_1 \ne 0$. From $v_1 y_1 + v_2 y_2 = 0$ we get $v_1 y_1 - v_2 y_1 = (v_1 - v_2) y_1 = 0$. Since $y_1 \ne 0$, it follows that $v_1 - v_2 = 0$; likewise we can derive that $w_1 - w_2 = 0$. As in our proof of Proposition 6, the two assets would then be identical, which contradicts the assumption. So $y^T \nabla_x^2 L(x, \lambda)\, y > 0$, i.e. $L''(x, \lambda)$ is a positive definite matrix at $x = x^*$, and $x = x^*$ is a minimizer of the utility function in problem (18).
Generalized algorithm for the n-asset problem

For the $n$-asset selection problem, we can break the problem down into 3-asset and 2-asset subproblems, as discussed above, and provide a generalized algorithm. This algorithm terminates in $O(n^3)$ steps.
Step 1: Let $c := +\infty$ and $x_c := [0, \ldots, 0]$.

Step 2: Choose three points from the bag $\{(v_i, w_i, u_i), i = 1, \ldots, n\}$ which have not been considered yet. If there are no such points, go to Step 9; otherwise denote these three points by $(v_j, w_j, u_j)$, $(v_k, w_k, u_k)$ and $(v_l, w_l, u_l)$. Let $(v_1, w_1, u_1) := (v_j, w_j, u_j)$, $(v_2, w_2, u_2) := (v_k, w_k, u_k)$ and $(v_3, w_3, u_3) := (v_l, w_l, u_l)$.

Step 3: If

$$\det\begin{pmatrix} q_1 & r_1 \\ q_2 & r_2 \end{pmatrix} = \det\begin{pmatrix} v_1 - v_3 & w_1 - w_3 \\ v_2 - v_3 & w_2 - w_3 \end{pmatrix} = 0,$$

go to Step 2; otherwise go to Step 4.

Step 4: Compute the first two components $[x_1^*, x_2^*]$ of the optimal solution to (11) using equation (16).

Step 5: If $[x_1^*, x_2^*, 1 - x_1^* - x_2^*] > 0$, go to Step 6; otherwise go to Step 2.

Step 6: If $U(x_1^*, x_2^*, 1 - x_1^* - x_2^*) < c$, go to Step 7; otherwise go to Step 2.

Step 7: Let $c = U(x_1^*, x_2^*, 1 - x_1^* - x_2^*)$, and let

$$x_c = [0, \ldots, 0, \underbrace{x_1^*}_{j\text{th}}, 0, \ldots, 0, \underbrace{x_2^*}_{k\text{th}}, 0, \ldots, 0, \underbrace{x_3^*}_{l\text{th}}, 0, \ldots, 0].$$

Step 8: Go to Step 2.

Step 9: Choose two points from the bag $\{(v_i, w_i, u_i), i = 1, \ldots, n\}$ which have not been considered yet. If there are no such points, go to Step 16; otherwise denote these two points by $(v_j, w_j, u_j)$ and $(v_k, w_k, u_k)$. Let $(v_1, w_1, u_1) := (v_j, w_j, u_j)$ and $(v_2, w_2, u_2) := (v_k, w_k, u_k)$.

Step 10: If $(v_1 - v_2)^2 + (w_1 - w_2)^2 = 0$, go to Step 9; otherwise go to Step 11.

Step 11: Compute the first component $x_1^*$ of the optimal solution to (18) using equation (22).

Step 12: If $(x_1^*, x_2^*) = (x_1^*, 1 - x_1^*) > 0$, go to Step 13; otherwise go to Step 9.

Step 13: If $U(x_1^*, 1 - x_1^*) < c$, go to Step 14; otherwise go to Step 9.

Step 14: Let $c = U(x_1^*, 1 - x_1^*)$, and let

$$x_c = [0, \ldots, 0, \underbrace{x_1^*}_{j\text{th}}, 0, \ldots, 0, \underbrace{x_2^*}_{k\text{th}}, 0, \ldots, 0].$$

Step 15: Go to Step 9.

Step 16: Choose a point from the bag $\{(v_i, w_i, u_i), i = 1, \ldots, n\}$ which has not been considered yet. If there is no such point, go to Step 20; otherwise denote this point by $(v_j, w_j, u_j)$.

Step 17: If $U(v_j, w_j, u_j) = v_j^2 + w_j^2 - u_j < c$, go to Step 18; otherwise go to Step 16.

Step 18: Let $c = U(v_j, w_j, u_j) = v_j^2 + w_j^2 - u_j$, and let

$$x_c = [0, \ldots, 0, \underbrace{1}_{j\text{th}}, 0, \ldots, 0].$$

Step 19: Go to Step 16.

Step 20: $x_c$ is an optimal solution and $-c$ is the optimal value of the original portfolio selection problem (8).
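The enumeration above can be sketched compactly in Python. This is my own sketch, not the authors' code: `U_min` is the minimization objective of problem (10), $v_0^2 + w_0^2 - u_0$; `solve_three` uses the closed form of equation (16) as it is applied in the numerical illustration below, with $q_i = v_i - v_3$ and $r_i = w_i - w_3$; `solve_two` uses equation (22). The final check reuses the FB/XOM/KO triplets of the numerical illustration.

```python
from itertools import combinations

# Sketch of the generalized enumeration algorithm (helper names are mine).

def U_min(assets, x):
    """Objective of problem (10): v0^2 + w0^2 - u0 for weight vector x."""
    v0 = sum(v * xi for (v, w, u), xi in zip(assets, x))
    w0 = sum(w * xi for (v, w, u), xi in zip(assets, x))
    u0 = sum(u * xi for (v, w, u), xi in zip(assets, x))
    return v0 ** 2 + w0 ** 2 - u0

def solve_three(a1, a2, a3):
    """3-asset candidate via equation (16); None if collinear in (v, w)."""
    (v1, w1, u1), (v2, w2, u2), (v3, w3, u3) = a1, a2, a3
    q1, r1, q2, r2 = v1 - v3, w1 - w3, v2 - v3, w2 - w3
    d = q1 * r2 - r1 * q2
    if d == 0:                                   # Step 3: skip collinear triples
        return None
    b1 = 0.5 * (u1 - u3) - q1 * v3 - r1 * w3
    b2 = 0.5 * (u2 - u3) - q2 * v3 - r2 * w3
    x1 = ((q2 ** 2 + r2 ** 2) * b1 - (q1 * q2 + r1 * r2) * b2) / d ** 2
    x2 = ((q1 ** 2 + r1 ** 2) * b2 - (q1 * q2 + r1 * r2) * b1) / d ** 2
    return x1, x2, 1 - x1 - x2

def solve_two(a1, a2):
    """2-asset candidate via equation (22); None if indistinguishable."""
    (v1, w1, u1), (v2, w2, u2) = a1, a2
    denom = (v1 - v2) ** 2 + (w1 - w2) ** 2
    if denom == 0:                               # Step 10
        return None
    x1 = (0.5 * (u1 - u2) - (v1 - v2) * v2 - (w1 - w2) * w2) / denom
    return x1, 1 - x1

def best_portfolio(assets):
    """Enumerate 3-, 2- and 1-asset candidates; keep the smallest U_min."""
    n = len(assets)
    candidates = []                              # (index tuple, weight tuple)
    for i, j, k in combinations(range(n), 3):
        sol = solve_three(assets[i], assets[j], assets[k])
        if sol and all(t > 0 for t in sol):      # Step 5: feasibility check
            candidates.append(((i, j, k), sol))
    for i, j in combinations(range(n), 2):
        sol = solve_two(assets[i], assets[j])
        if sol and all(t > 0 for t in sol):      # Step 12
            candidates.append(((i, j), sol))
    for i in range(n):                           # Steps 16-19: single assets
        candidates.append(((i,), (1.0,)))
    best_c, best_x = float("inf"), None
    for idx, sol in candidates:
        x = [0.0] * n
        for pos, weight in zip(idx, sol):
            x[pos] = weight
        c = U_min(assets, x)
        if c < best_c:
            best_c, best_x = c, x
    return best_x, -best_c                       # Step 20: -c is the value of (8)

# Triplets from the numerical illustration (FB, XOM, KO):
assets = [(2.331, 0.707, 5.617), (0.684, 0.154, 0.133), (0.643, 0.102, 0.000)]
x, value = best_portfolio(assets)
print([round(t, 3) for t in x])     # [0.517, 0.0, 0.483]
```

Note that the brute-force enumeration mirrors the $O(n^3)$ bound: the dominant cost is the $\binom{n}{3}$ triples examined in the first loop.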
Numerical illustration

We now use real-life data to demonstrate the proposed algorithm. For simplicity, we consider a 3-asset problem. To alleviate the impact of correlation between distinct assets, we look for companies from uncorrelated or weakly correlated industrial sectors. Hence, we choose Facebook Inc. (FB), Exxon Mobil Corporation (XOM), and The Coca-Cola Company (KO). Since Facebook held its initial public offering (IPO) on May 18, 2012, we pick monthly quotes of these three stocks from May 2012 to April 2016. All data are collected from http://finance.yahoo.com.
We first compute monthly rates of return from the stock quotes by the following equation:

$$r_{it}\% = 100 \times \frac{P_{i,t+1} - P_{i,t}}{P_{i,t}}\,\%$$

where $r_{it}$ is the percentage return on asset $i$ in month $t$. Note that since the utility function (see Section 2.1) includes a scaling factor of 0.005 to avoid small decimals, we use percentages rather than decimals for the returns. There are 48 monthly stock quotes, so we can obtain 47 monthly percentage returns for each asset.
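The return computation above is a one-liner over consecutive quotes. A minimal sketch (the price series below is made-up sample data, not the actual Yahoo quotes):

```python
# Sketch: monthly percentage returns from a price series, per the equation
# above. The prices are hypothetical sample values, not the actual quotes.

def pct_returns(prices):
    return [100 * (p_next - p) / p for p, p_next in zip(prices, prices[1:])]

prices = [38.2, 31.1, 21.7, 18.1, 21.7]   # hypothetical monthly quotes
rets = pct_returns(prices)
print(len(rets))   # five quotes yield four returns
```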
As the $r_i$ are assumed to be trapezoidal fuzzy variables with possibilistic distributions, we need to determine their exact trapezoidal forms. Normally, the researcher can use the Delphi method [4] to decide the trapezoidal form. In our illustration, we use the frequency statistic method (Gupta et al., 2008 [3]) to estimate the trapezoidal fuzzy return rates.
The percentage returns on Facebook Inc. (FB) are graphed in Fig. 3. From Fig. 3 we observe that most of the historical data fall into the intervals $[-12.0, -4.0]$, $[-4.0, 4.0]$, $[4.0, 12.0]$ and $[12.0, 20.0]$. We take the mid-points of the intervals $[-12.0, -4.0]$ and $[12.0, 20.0]$ as the left and right end points of the tolerance interval, respectively; thus the tolerance interval of the fuzzy percentage returns is $[-8.0, 16.0]$. Going through all the historical data, we find the minimum possible value $-30.2$ and the maximum possible value $47.9$, and view them as the lower and upper limits of the uncertain percentage returns in the future. Therefore the left spread is $22.2$ and the right spread is $31.9$, and the trapezoidal percentage return on FB is $r_1 = (-8.0, 16.0, 22.2, 31.9)$.

Likewise, we obtain the trapezoidal returns on XOM, $r_2 = (-4.6, 3.8, 4.3, 7.5)$, and on KO, $r_3 = (-4.5, 4.5, 3.9, 3.9)$.
Assuming $A = 2.46$, we can calculate

$$(v_1, w_1, u_1) = (2.331,\ 0.707,\ 5.617),$$
$$(v_2, w_2, u_2) = (0.684,\ 0.154,\ 0.133),$$
$$(v_3, w_3, u_3) = (0.643,\ 0.102,\ 0.000).$$
First consider the 3-asset problem with $(v_1, w_1, u_1)$, $(v_2, w_2, u_2)$ and $(v_3, w_3, u_3)$. Since

$$\det\begin{pmatrix} q_1 & r_1 \\ q_2 & r_2 \end{pmatrix} = \det\begin{pmatrix} v_1 - v_3 & w_1 - w_3 \\ v_2 - v_3 & w_2 - w_3 \end{pmatrix} = \det\begin{pmatrix} 1.688 & 0.605 \\ 0.041 & 0.052 \end{pmatrix} = 0.063 \ne 0,$$

we get

$$\begin{pmatrix} x_1^* \\ x_2^* \end{pmatrix} = \frac{1}{(q_1 r_2 - r_1 q_2)^2}\begin{pmatrix} q_2^2 + r_2^2 & -(q_1 q_2 + r_1 r_2) \\ -(q_1 q_2 + r_1 r_2) & q_1^2 + r_1^2 \end{pmatrix}\begin{pmatrix} \frac{1}{2}(u_1 - u_3) - q_1 v_3 - r_1 w_3 \\ \frac{1}{2}(u_2 - u_3) - q_2 v_3 - r_2 w_3 \end{pmatrix}$$
$$= \frac{1}{0.063^2}\begin{pmatrix} 0.004 & -0.100 \\ -0.100 & 3.214 \end{pmatrix}\begin{pmatrix} 1.661 \\ 0.035 \end{pmatrix} = \begin{pmatrix} 0.792 \\ -13.5072 \end{pmatrix}.$$

Notice that $x_2^* < 0$, which is not feasible, so we find no qualified 3-asset candidate for an optimal solution to (10).
Now we turn to all conceivable 2-asset problems:

① For the combination of FB and XOM, since $(v_1 - v_2)^2 + (w_1 - w_2)^2 = 3.018 \ne 0$, we get

$$x_1^* = \frac{1}{(v_1 - v_2)^2 + (w_1 - w_2)^2}\left[\frac{1}{2}(u_1 - u_2) - (v_1 - v_2)v_2 - (w_1 - w_2)w_2\right] = \frac{1}{3.018} \times 1.530 = 0.507.$$

Thus $[0.507, 0.493, 0]$ is a qualified candidate for an optimal solution to (10), where $U(0.507, 0.493, 0) = -0.417$.

② For the combination of FB and KO, since $(v_1 - v_3)^2 + (w_1 - w_3)^2 = 3.214 \ne 0$, we get

$$x_1^* = \frac{1}{(v_1 - v_3)^2 + (w_1 - w_3)^2}\left[\frac{1}{2}(u_1 - u_3) - (v_1 - v_3)v_3 - (w_1 - w_3)w_3\right] = \frac{1}{3.214} \times 1.661 = 0.517.$$

Thus $[0.517, 0, 0.483]$ is a qualified candidate for an optimal solution to (10), where $U(0.517, 0, 0.483) = -0.435$.

③ For the combination of XOM and KO, since $(v_2 - v_3)^2 + (w_2 - w_3)^2 = 0.004 \ne 0$, we get

$$x_1^* = \frac{1}{(v_2 - v_3)^2 + (w_2 - w_3)^2}\left[\frac{1}{2}(u_2 - u_3) - (v_2 - v_3)v_3 - (w_2 - w_3)w_3\right] = \frac{1}{0.004} \times 0.035 = 8.75 > 1.$$

Thus this cannot be a qualified candidate for an optimal solution to (10).
Finally, we compute the utility values of all the 1-asset options:

$$U(1, 0, 0) = v_1^2 + w_1^2 - u_1 = 0.316;$$
$$U(0, 1, 0) = v_2^2 + w_2^2 - u_2 = 0.359;$$
$$U(0, 0, 1) = v_3^2 + w_3^2 - u_3 = 0.424.$$

Comparing the function values of all feasible solutions, we find that the optimal portfolio is $x^* = [0.517, 0, 0.483]$, i.e. the combination of Facebook (51.7%) and Coca-Cola (48.3%).
Remarks on Carlsson-Fullér-Majlender's model

6.1 Assumption of covariance

To calculate the possibilistic variance of a linear combination of fuzzy variables, we shall use the following theorem (Sánta, 2012 [6]):

$$\mathrm{Var}\left(\lambda_0 + \sum_{i=1}^n \lambda_i A_i\right) = \sum_{i=1}^n \lambda_i^2\,\mathrm{Var}(A_i) + 2\sum_{1 \le i < j \le n} \lambda_i \lambda_j\,\mathrm{Cov}(A_i, A_j), \qquad (23)$$

where the $A_i$ are fuzzy variables and the $\lambda_i$ are real numbers, $i = 1, \ldots, n$.
A simpler two-variable version of this theorem was presented in Carlsson and Fullér (2001) [5]. From these definitions, we learn that we need to take the covariance terms into account when calculating the variance of combinations of fuzzy numbers.

Note that when Carlsson-Fullér-Majlender's model derives the possibilistic variance of the whole portfolio, it actually ignores the intercorrelation between different assets; that is, it assumes the covariances to be zero. The subsequent discussion is based on this hypothesis. In that case, Carlsson-Fullér-Majlender's model only applies to cases where the candidate assets are uncorrelated or very weakly correlated. In practice, however, it is unrealistic to find assets that are totally uncorrelated, so the model is not as applicable and effective as we might expect. We need to pick assets from different industrial sectors carefully in order to comply with the zero-covariance assumption.
6.2 Possibilistic variance of the portfolio

Even if we rule out the covariance terms, there is still some confusion in equation (7). Note that $x_i$ is a real number and $r_i$ is a fuzzy number. Using the formula in (23), we can derive the portfolio variance as

$$\begin{aligned}
\sigma^2\left(\sum_{i=1}^n r_i x_i\right) &= \sum_{i=1}^n x_i^2\,\sigma^2(r_i) \\
&= \sum_{i=1}^n x_i^2\left[\left(\frac{b_i - a_i}{2} + \frac{\alpha_i + \beta_i}{6}\right)^2 + \frac{(\alpha_i + \beta_i)^2}{72}\right] \quad \text{(from equation (4))} \\
&= \sum_{i=1}^n x_i^2\left[\left(\frac{1}{2}\left[b_i - a_i + \frac{1}{3}(\alpha_i + \beta_i)\right]\right)^2 + \frac{(\alpha_i + \beta_i)^2}{72}\right] \\
&= \sum_{i=1}^n \left(\frac{1}{2}\left[b_i - a_i + \frac{1}{3}(\alpha_i + \beta_i)\right] x_i\right)^2 + \frac{1}{72}\sum_{i=1}^n \left[(\alpha_i + \beta_i) x_i\right]^2 \\
&\ne \left(\frac{1}{2}\sum_{i=1}^n \left[b_i - a_i + \frac{1}{3}(\alpha_i + \beta_i)\right] x_i\right)^2 + \frac{1}{72}\left[\sum_{i=1}^n (\alpha_i + \beta_i) x_i\right]^2,
\end{aligned}$$

which is what equation (7) gives.
If equation (7) is not true, neither is the rest of the discussion; the whole Carlsson-Fullér-Majlender model, therefore, does not seem convincing to me. Considering that this is a well-known model in fuzzy optimization, I am not sure whether I have misunderstood something here.
If equation (7) is corrected to

$$\sigma^2\left(\sum_{i=1}^n r_i x_i\right) = \sum_{i=1}^n \left(\frac{1}{2}\left[b_i - a_i + \frac{1}{3}(\alpha_i + \beta_i)\right] x_i\right)^2 + \frac{1}{72}\sum_{i=1}^n \left[(\alpha_i + \beta_i) x_i\right]^2 \qquad (24)$$
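That the two expressions genuinely differ as functions of the weights is easy to confirm numerically. A minimal sketch (the weight vector is hypothetical; the trapezoids are those of the numerical illustration): expression (7) squares the weighted sums, while expression (24) sums the squared weighted terms, and for a mixed portfolio the values diverge.

```python
# Sketch: evaluate the right-hand sides of (7) ("square of the sum") and
# (24) ("sum of squares") on the same data to show they differ.

trapezoids = [(-8.0, 16.0, 22.2, 31.9),   # FB
              (-4.6, 3.8, 4.3, 7.5),      # XOM
              (-4.5, 4.5, 3.9, 3.9)]      # KO
x = [0.5, 0.3, 0.2]                       # hypothetical weights, summing to 1

def eq7(traps, x):
    s1 = sum(0.5 * (b - a + (al + be) / 3) * xi for (a, b, al, be), xi in zip(traps, x))
    s2 = sum((al + be) * xi for (a, b, al, be), xi in zip(traps, x))
    return s1 ** 2 + s2 ** 2 / 72

def eq24(traps, x):
    t1 = sum((0.5 * (b - a + (al + be) / 3) * xi) ** 2 for (a, b, al, be), xi in zip(traps, x))
    t2 = sum(((al + be) * xi) ** 2 for (a, b, al, be), xi in zip(traps, x))
    return t1 + t2 / 72

assert eq7(trapezoids, x) != eq24(trapezoids, x)   # the two values differ
```

The two expressions do coincide when the whole weight is on a single asset, which is why the discrepancy only shows up for genuinely mixed portfolios.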
then the optimization problem turns into

$$\max_{x_i}\; U(P) = \sum_{i=1}^n u_i x_i - \sum_{i=1}^n (v_i x_i)^2 - \sum_{i=1}^n (w_i x_i)^2$$
$$\text{s.t.}\quad \sum_{i=1}^n x_i = 1,\; x_i \ge 0,\; i = 1, 2, \ldots, n \qquad (25)$$

rather than problem (8), which is

$$\max_{x_i}\; U(P) = \sum_{i=1}^n u_i x_i - \left(\sum_{i=1}^n v_i x_i\right)^2 - \left(\sum_{i=1}^n w_i x_i\right)^2$$
$$\text{s.t.}\quad \sum_{i=1}^n x_i = 1,\; x_i \ge 0,\; i = 1, 2, \ldots, n \qquad (8)$$
and thus the solutions will change correspondingly.
6.3 Feasibility of the solution

As we disregard short-selling and long-buying, the feasible set of the solution should be $\{x_i \in \mathcal{R} : x_i \in [0,1], i = 1, \ldots, n\}$. However, this condition is not included in the constraints of the optimization problem. Carlsson-Fullér-Majlender's model, in fact, computes not-necessarily-feasible weights, so we need to check feasibility every time we obtain a candidate solution. This may cause some inconvenience.
Conclusions

In this paper, we introduce Carlsson-Fullér-Majlender's trapezoidal possibility model to address the fuzzy portfolio selection problem. We devise a utility function based on the portfolio selection theory formulated by Markowitz (1952) [7]. Using properties of trapezoidal fuzzy variables as well as optimization theory, we translate the selection problem into a non-linear programming problem, to which we can apply the Lagrange multiplier method and the Karush-Kuhn-Tucker (KKT) conditions to calculate the optimal solutions. We provide a generalized algorithm for the problem and then use real data for illustration. We end the paper with some personal reflections on the model, including its limitations and even some possible faults.
References

[1] Carlsson, Christer, Robert Fullér, and Péter Majlender. "A possibilistic approach to selecting portfolios with highest utility score." Fuzzy Sets and Systems 131.1 (2002): 13-21.
[2] Gupta, Pankaj, et al. Fuzzy Portfolio Optimization. Springer-Verlag, Berlin, 2014.
[3] Gupta, Pankaj, Mukesh Kumar Mehlawat, and Anand Saxena. "Asset portfolio optimization using fuzzy mathematical programming." Information Sciences 178.6 (2008): 1734-1755.
[4] Linstone, Harold A., and Murray Turoff, eds. The Delphi Method: Techniques and Applications. Vol. 29. Reading, MA: Addison-Wesley, 1975.
[5] Carlsson, Christer, and Robert Fullér. "On possibilistic mean value and variance of fuzzy numbers." Fuzzy Sets and Systems 122.2 (2001): 315-326.
[6] Sánta, Katalin. "Portfolio Optimization with Fuzzy Constraints." 2012.
[7] Markowitz, Harry. "Portfolio selection." The Journal of Finance 7.1 (1952): 77-91.
[8] Markowitz, Harry M. Portfolio Selection: Efficient Diversification of Investments. Vol. 16. Yale University Press, 1968.
[9] Zadeh, Lotfi A. "Fuzzy sets." Information and Control 8.3 (1965): 338-353.
[10] Bellman, Richard E., and Lotfi Asker Zadeh. "Decision-making in a fuzzy environment." Management Science 17.4 (1970): B-141.