This document discusses endogenous benchmarking of mutual funds using bootstrap data envelopment analysis (DEA) in R. It benchmarks funds using multiple outputs, stochastic dominance indicators, and bootstrap analysis for more robust evaluation. The study applies DEA with mean daily return and mean upside potential as outputs and return variance as the input to evaluate selected sector funds over six months. Descriptive statistics of the technical efficiency scores from input-oriented, output-oriented, and graph hyperbolic DEA models are provided, and bootstrapping techniques (the naive and smoothed bootstrap, bias correction, and confidence intervals) are introduced.
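Since the study is implemented in R, a minimal sketch of the kind of workflow it describes may help. This assumes the Benchmarking package's dea() and dea.boot() functions (argument and result names may differ across package versions), and the data below are invented for illustration:

```r
# Hypothetical sketch: input-oriented bootstrap DEA for mutual funds.
# Assumes the 'Benchmarking' package; consult ?dea and ?dea.boot for
# the exact interface in your installed version.
library(Benchmarking)

# Toy data: 5 funds, input = return variance, outputs = mean return
# and mean upside potential (illustrative numbers only).
X <- matrix(c(0.8, 1.1, 0.9, 1.4, 1.0), ncol = 1)            # input
Y <- cbind(mean_ret = c(0.05, 0.07, 0.04, 0.08, 0.06),
           upside   = c(0.03, 0.04, 0.02, 0.05, 0.03))        # outputs

d <- dea(X, Y, RTS = "vrs", ORIENTATION = "in")   # point estimates
summary(d$eff)

b <- dea.boot(X, Y, NREP = 2000, RTS = "vrs", ORIENTATION = "in")
b$eff.bc      # bias-corrected efficiency scores
b$conf.int    # bootstrap confidence intervals
```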
Universal Portfolios Generated by Reciprocal Functions of Price Relatives - iosrjce
IOSR Journal of Mathematics (IOSR-JM) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of mathematics and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in mathematics. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
Solution to Black-Scholes P.D.E. via Finite Difference Methods (MatLab) - Fynn McKay
A simple, implementable numerical-analysis approach to solving the famous Black-Scholes P.D.E. via finite difference methods for the fair price of a European option.
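For readers without MATLAB, here is a minimal R sketch of the same idea: an explicit finite-difference scheme marching backward from the payoff. The grid sizes and the Smax = 4K truncation are illustrative choices, not taken from the original:

```r
# Explicit finite differences for a European call under Black-Scholes:
# V_t + 0.5*sigma^2*S^2*V_SS + r*S*V_S - r*V = 0, stepped backward in time.
bs_fd_call <- function(S0, K, r, sigma, Tmat, M = 200, N = 20000) {
  Smax <- 4 * K                             # truncate the S domain
  dS <- Smax / M; dt <- Tmat / N            # N chosen large for stability
  S <- (0:M) * dS
  V <- pmax(S - K, 0)                       # payoff at maturity
  for (n in 1:N) {                          # march backward in time
    i <- 2:M
    delta <- (V[i + 1] - V[i - 1]) / (2 * dS)
    gamma <- (V[i + 1] - 2 * V[i] + V[i - 1]) / dS^2
    V[i] <- V[i] + dt * (0.5 * sigma^2 * S[i]^2 * gamma +
                         r * S[i] * delta - r * V[i])
    V[1] <- 0                               # boundary at S = 0
    V[M + 1] <- Smax - K * exp(-r * n * dt) # boundary at S = Smax
  }
  approx(S, V, xout = S0)$y                 # interpolate price at S0
}
bs_fd_call(100, 100, 0.05, 0.2, 1)          # approx. 10.45
```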
Gentle Introduction to Dirichlet Processes - Yap Wooi Hen
This document provides an introduction to Dirichlet processes. It begins by motivating the need for nonparametric clustering when the number of clusters in the data is unknown. It then provides an overview of Dirichlet processes and discusses them from multiple perspectives, including samples from a Dirichlet process, the Chinese restaurant process representation, stick breaking construction, and formal definition. It also covers Dirichlet process mixtures and common inference techniques like Markov chain Monte Carlo and variational inference.
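A compact sketch of the stick-breaking construction mentioned above; the base measure G0 = N(0, 1) and the truncation level are illustrative choices, not from the original document:

```r
# Stick-breaking construction of a draw from DP(alpha, G0),
# truncated at L atoms; G0 here is N(0, 1).
set.seed(1)
alpha <- 2; L <- 100
v <- rbeta(L, 1, alpha)               # stick-breaking proportions
w <- v * cumprod(c(1, 1 - v[-L]))     # weights w_k = v_k * prod_{j<k}(1 - v_j)
theta <- rnorm(L)                     # atoms drawn from G0
# Ten samples from the (truncated) random measure:
sample(theta, 10, replace = TRUE, prob = w)
```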
This document provides an overview of the topics covered in Unit V: Linear Programming. It begins with an introduction to operations research and some example problems that can be modeled as linear programs. It then discusses formulations of linear programs, including the standard and slack forms. The document outlines the simplex algorithm for solving linear programs and how to convert between standard and slack forms. It provides examples demonstrating these concepts. The key topics covered are linear programming models, formulations, and the simplex algorithm.
The document discusses various optimization methods for solving different types of optimization problems. It begins by defining a general optimization problem and then describes several specific problem types including linear programming (LP), integer programming (IP), mixed-integer linear programming (MILP), nonlinear programming (NLP), and mixed-integer nonlinear programming (MINLP). It provides examples and discusses solution methods like the simplex algorithm, branch and bound, and decomposition approaches.
In this paper, the Black-Litterman model is introduced to quantify investors' views, and the safety-first portfolio model is extended to the case where the distribution of risky asset returns is ambiguous. When short-selling of risk-free assets is allowed, the model is transformed into a second-order cone optimization problem with investor views. The ambiguity set parameters are calibrated through programming.
The document discusses linear programming and the simplex method for solving linear programming problems. It begins with definitions of linear programming and its history. It then provides an example production planning problem that can be formulated as a linear programming problem. The document goes on to describe the standard form of a linear programming problem and terminology used. It explains how the simplex method works through iterative improvements to find the optimal solution. This is illustrated both geometrically and through an algebraic example solved using the simplex method.
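As a concrete illustration of such a production-planning formulation, here is a small LP solved with the lpSolve package (a simplex-based solver); the coefficients are a made-up example, not the document's:

```r
# Maximize 3*x1 + 5*x2 subject to resource constraints:
#   x1          <= 4    (machine hours, line 1)
#          2*x2 <= 12   (machine hours, line 2)
#   3*x1 + 2*x2 <= 18   (labour hours)
library(lpSolve)

obj <- c(3, 5)
A   <- rbind(c(1, 0), c(0, 2), c(3, 2))
dir <- rep("<=", 3)
rhs <- c(4, 12, 18)

sol <- lp("max", obj, A, dir, rhs)
sol$objval    # optimal value: 36
sol$solution  # optimal plan: x1 = 2, x2 = 6
```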
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more detail or to submit your article, please visit www.ijera.com
Black-Litterman portfolio optimization - Hoang Nguyen
This document provides an overview and application of the Black-Litterman portfolio optimization model. It summarizes the key steps of the Black-Litterman model, which combines an investor's subjective views on expected returns with an implied equilibrium to determine optimal portfolio weights. The document then applies the Black-Litterman model to 10 stocks from the Ho Chi Minh City stock exchange in Vietnam over a one-year period. It finds that Black-Litterman portfolios achieved significantly better return-to-risk performance than the traditional mean-variance approach.
This document discusses various mathematical models used in finance to model stock prices and returns. It introduces random walk models, the lognormal model, general equilibrium theories, the Capital Asset Pricing Model (CAPM), and the Arbitrage Pricing Theory (APT). The CAPM and APT are equilibrium asset pricing models based on assumptions such as rational investors seeking to maximize returns while minimizing risk.
This document discusses dynamic programming and provides examples to illustrate the technique. It begins by defining dynamic programming as a bottom-up approach to problem solving where solutions to smaller subproblems are stored and built upon to solve larger problems. It then provides examples of dynamic programming algorithms for calculating Fibonacci numbers, binomial coefficients, and finding shortest paths using Floyd's algorithm. The key aspects of dynamic programming like avoiding recomputing solutions and storing intermediate results in tables are emphasized.
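A minimal sketch of the bottom-up technique described above, using the Fibonacci example: solutions to subproblems are stored in a table rather than recomputed:

```r
# Bottom-up dynamic programming for Fibonacci numbers.
fib_dp <- function(n) {
  if (n < 2) return(n)
  f <- numeric(n + 1)                  # table of subproblem solutions
  f[1 + 0] <- 0; f[1 + 1] <- 1         # base cases (1-based indexing)
  for (i in 2:n) f[1 + i] <- f[1 + i - 1] + f[1 + i - 2]
  f[1 + n]
}
fib_dp(10)  # 55
```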
NP-hard and NP-complete problems deal with the distinction between problems that can be solved in polynomial time and those for which no polynomial-time algorithm is known. The document discusses key concepts such as P versus NP, the theory of NP-completeness, nondeterministic algorithms, reducibility, Cook's theorem, and examples of NP-hard graph problems such as graph coloring. Cook's theorem shows that the satisfiability problem is in NP and is NP-complete, meaning that if satisfiability (or any NP-complete problem) can be solved in polynomial time, then P = NP.
Skiena algorithm 2007 lecture16 introduction to dynamic programming - zukun
This document summarizes a lecture on dynamic programming. It begins by introducing dynamic programming as a powerful tool for solving optimization problems on ordered items like strings. It then contrasts greedy algorithms, which make locally optimal choices, with dynamic programming, which systematically searches all possibilities while storing results. The document provides examples of computing Fibonacci numbers and binomial coefficients using dynamic programming by storing partial results rather than recomputing them. It outlines three key steps to applying dynamic programming: formulating a recurrence, bounding subproblems, and specifying an evaluation order.
International Journal of Computational Engineering Research (IJCER) - ijceronline
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
This presentation explains linear programming in operations research. There is a software package called "Gipels" available on the internet which easily solves LPP problems along with transportation problems. This presentation was co-developed with Sankeerth P & Aakansha Bajpai.
By: Aniruddh Tiwari (LinkedIn: http://in.linkedin.com/in/aniruddhtiwari)
IRJET - Analytic Evaluation of the Head Injury Criterion (HIC) within the Fram... - IRJET Journal
This document presents an analytic evaluation of the Head Injury Criterion (HIC) within the framework of constrained optimization theory. The HIC is a weighted impulse function used to predict the probability of closed head injury based on measured head acceleration. Previous work analyzed the unclipped HIC function, but the clipped HIC formulation used in practice limits the evaluation window duration. The author develops analytic relationships for determining the window initiation and termination points to maximize the clipped HIC function. Example applications illustrate the general solutions for when head acceleration is defined by a single function or composite functions over the evaluation domain.
The document provides information about operations research (OR), including its phases, methodology, mathematical modeling, and linear programming (LP). It lists the 7 steps of the OR methodology as defining the problem, observing the system, formulating a mathematical model, verifying the model, selecting alternatives, presenting results, and implementing recommendations. It also discusses the components of a mathematical model, conditions for a linear programming model, major application areas of LP, and basic assumptions of LP. Finally, it provides examples and questions about LP modeling and solving using graphical and simplex methods.
The document discusses the Simplex method for solving linear programming problems involving profit maximization and cost minimization. It provides an overview of the concept and steps of the Simplex method, and gives an example of formulating and solving a farm linear programming model to maximize profits from two products. The document also discusses some complications that can arise in applying the Simplex method.
IJCER (www.ijceronline.com) International Journal of Computational Engineerin... - ijceronline
This document presents a method for solving fuzzy assignment problems where costs are represented by linguistic variables and fuzzy numbers. Linguistic variables are used to convert qualitative cost data into quantitative fuzzy numbers. Yager's ranking method is applied to rank the fuzzy numbers, transforming the fuzzy assignment problem into a crisp one. The resulting crisp problem is then solved using the Hungarian method to find the optimal assignment that minimizes total cost. A numerical example demonstrates the approach, showing a fuzzy cost matrix converted to crisp values and solved. The method allows handling assignment problems with imprecise, qualitative cost data using fuzzy logic concepts.
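A sketch of the final crisp step only: once Yager's ranking has turned the fuzzy costs into crisp numbers, the assignment problem can be solved exactly. Here lpSolve's lp.assign() stands in for a hand-coded Hungarian method (both reach the same optimum); the cost matrix is invented:

```r
# Solve a crisp assignment problem; lp.assign() formulates it as an LP
# equivalent to the Hungarian-method solution.
library(lpSolve)

crisp_cost <- matrix(c(10, 22, 15,
                       18, 12, 20,
                       16, 19, 11), nrow = 3, byrow = TRUE)
sol <- lp.assign(crisp_cost)
sol$objval    # minimum total cost: 33 (diagonal assignment)
sol$solution  # 0/1 matrix of optimal assignments
```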
Spark Summit talk, July 2014, powered by Reveal - Debasish Das
This document discusses using quadratic programming solvers for non-negative matrix factorization with Spark. It provides an overview of matrix factorization and how NMF can be formulated as a quadratic program. It then describes using ADMM and ECOS to solve the resulting QP, including implementations in Spark. Experimental results on movie recommendation datasets show the performance of different approaches for constraints like positivity, sparsity, and equality constraints. Future work areas include optimization and additional constrained convex minimization problems.
This document discusses dynamic programming techniques. It covers matrix chain multiplication and all pairs shortest paths problems. Dynamic programming involves breaking down problems into overlapping subproblems and storing the results of already solved subproblems to avoid recomputing them. It has four main steps - defining a mathematical notation for subproblems, proving optimal substructure, deriving a recurrence relation, and developing an algorithm using the relation.
The document summarizes key concepts related to the Black-Scholes partial differential equation. It introduces the Black-Scholes model, which revolutionized finance by finding the fair price of derivatives. The formula was derived via a transformation to the heat equation and rests on constructing a riskless hedge that earns the risk-free rate, ruling out riskless excess profits. It discusses the variables in the Black-Scholes equation, such as stock price, exercise price, volatility and the risk-free rate. An example valuation of a call and a put option is shown. The document also covers fundamental concepts like interest rates, probability, expected value, and continuous random variables.
This document provides an introduction to linear programming models. It discusses key components of linear models including decision variables, objective functions, and constraints. It then presents a prototype example of using linear programming to optimize production levels at Galaxy Industries. The optimal solution is found using Excel Solver and sensitivity analysis is performed to analyze how changes impact the optimal solution. Various scenarios where models may not have a unique optimal solution are also discussed.
QUESTION BANK FOR ANNA UNIVERSITY SYLLABUS - JAMBIKA
First of all, I am very happy that this is the only university that keeps its blog updated. It encourages the habit of using algorithm analysis to justify design decisions when you write and implement new algorithms and to compare their experimental performance.
Conditional random fields (CRFs) are probabilistic models for segmenting and labeling sequence data. CRFs address limitations of previous models like hidden Markov models (HMMs) and maximum entropy Markov models (MEMMs). CRFs allow incorporation of arbitrary, overlapping features of the observation sequence and label dependencies. Parameters are estimated to maximize the conditional log-likelihood using iterative scaling or tracking partial feature expectations. Experiments show CRFs outperform HMMs and MEMMs on synthetic and real-world tasks by addressing label bias problems and modeling dependencies beyond the previous label.
IWSM 2014 - An analogy-based approach to estimation of software development ef... - Nesma
The document discusses fuzzy analogy, a technique for software effort estimation that can handle categorical data. It introduces fuzzy analogy and fuzzy k-modes clustering. Fuzzy k-modes is used to cluster similar software projects from a repository based on categorical attributes into homogeneous groups. Fuzzy analogy then assesses the similarity between projects based on their membership to clusters and estimates the effort of a new project as a weighted average of similar past projects' efforts. The document evaluates fuzzy analogy on 194 projects from the ISBSG repository selected based on data quality and attributes criteria.
The document provides an overview of the Black-Scholes option pricing model (BSOPM). It describes the key assumptions of the BSOPM, including that the underlying stock pays no dividends, markets are efficient, and prices are lognormally distributed. It also outlines how the BSOPM can be used to calculate theoretical option prices from historical data on the stock price, strike price, time to expiration, interest rate, and volatility. The document discusses implied volatility and how it differs from historical volatility, as well as limitations of the BSOPM.
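For concreteness, the closed-form BSOPM prices can be computed in a few lines of base R; the inputs below are illustrative, not from the document:

```r
# Closed-form Black-Scholes prices for a European call and put.
bsopm <- function(S, K, r, sigma, Tmat) {
  d1 <- (log(S / K) + (r + sigma^2 / 2) * Tmat) / (sigma * sqrt(Tmat))
  d2 <- d1 - sigma * sqrt(Tmat)
  call <- S * pnorm(d1) - K * exp(-r * Tmat) * pnorm(d2)
  put  <- K * exp(-r * Tmat) * pnorm(-d2) - S * pnorm(-d1)
  c(call = call, put = put)
}
bsopm(S = 100, K = 100, r = 0.05, sigma = 0.2, Tmat = 1)
# call approx. 10.45, put approx. 5.57
```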
Dynamic programming is a mathematical optimization method and computer programming technique used to solve complex problems by breaking them down into simpler subproblems. It was developed by Richard Bellman in the 1950s and has been applied in many fields. Dynamic programming problems can be solved optimally by breaking them into subproblems with optimal substructure that can be solved recursively. It uses techniques like top-down or bottom-up approaches and stores the results of subproblems to solve larger problems efficiently by avoiding recomputation of common subproblems. Multistage graphs are a type of problem well suited to dynamic programming, alongside techniques like greedy algorithms and Dijkstra's algorithm for finding shortest paths. Traversal and search algorithms such as breadth-first search are also discussed.
This document discusses probabilistic error bounds for order reduction of smooth nonlinear models. It begins with motivation for using reduced order models (ROM) in computationally intensive applications and the need for error metrics. It then provides background on Dixon's theory for probabilistic error bounds, which has mostly been used for linear models. The document outlines snapshot and gradient-based reduction algorithms to reduce the response and parameter interfaces of a model. It defines different types of errors that can occur from reducing these interfaces and discusses propagating the errors across interfaces using Dixon's theory. Numerical tests and results are briefly mentioned along with conclusions.
Propagation of Error Bounds Across Reduction Interfaces - Mohammad
This document summarizes the motivation, background, algorithms, and theory behind developing probabilistic error bounds for order reduction of smooth nonlinear models. It discusses how reduced order models (ROM) play an important role in computationally intensive applications and the need to provide error metrics with ROM predictions. It then describes snapshot and gradient-based reduction algorithms used at the response and parameter interfaces, respectively. It introduces different types of errors that can occur from reducing the response space only, parameter space only, or both spaces simultaneously, and how Dixon's theory can be used to estimate these relative errors.
IRJET - Optimization of 1-Bit ALU using Ternary Logic - IRJET Journal
This document summarizes a research paper that proposes a novel approach to implementing a 1-bit arithmetic logic unit (ALU) using ternary logic. Ternary logic offers potential advantages over binary logic, including reduced transistor count and hardware. The authors designed a 1-bit ALU using ternary logic gates (T-gates) for ternary arithmetic and logic operations. Simulation results showed the ternary logic ALU design achieved a 25% reduction in transistor usage compared to an equivalent binary logic ALU design. The ternary logic ALU design approach could potentially be extended to multi-bit ALUs for applications where reduced transistor count is important.
Asset Pricing and Portfolio Theory
I have presented a unique analysis which showcases the concepts of Aggregate & Aggregate lending and the numerical aspects of CAPM theory
The document proposes applying robust techniques like support vector clustering to portfolio optimization models to address uncertainties. It outlines constructing a robust semi-mean absolute deviation optimization model that uses support vector clustering to simulate an uncertainty set capturing uncertain asset returns from historical data. The methodology involves collecting market data, cleaning the data, training and testing the robust portfolio optimization model on different datasets and analyzing the results to capture uncertainties better than fixed uncertainty sets.
This document presents new certified optimal solutions found by the Charibde algorithm for six difficult benchmark optimization problems. Charibde combines an evolutionary algorithm and interval-based methods in a cooperative framework. It has achieved optimality proofs for five bound-constrained problems and one nonlinearly constrained problem. These problems are highly multimodal and some had not been solved before even with approximate methods. The document also compares Charibde's performance to other state-of-the-art solvers, showing it is highly competitive while providing reliable optimality proofs.
Error Estimates for Multi-Penalty Regularization under General Source Condition - csandit
In learning theory, the convergence issues of the regression problem are investigated with least-squares Tikhonov regularization schemes in both the RKHS norm and the L2 norm. We consider the multi-penalized least-squares regularization scheme under a general source condition with polynomial decay of the eigenvalues of the integral operator. One motivation for this work is to discuss the convergence issues for the widely considered manifold regularization scheme. The optimal convergence rates of the multi-penalty regularizer are achieved in the interpolation norm using the concept of effective dimension. Further, we propose a penalty balancing principle based on augmented Tikhonov regularization for the choice of regularization parameters. The superiority of multi-penalty regularization over single-penalty regularization is shown using an academic example and the moon data set.
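A toy illustration of the multi-penalty idea in its simplest finite-dimensional form (not the paper's RKHS scheme): two quadratic penalties added to a least-squares fit, with a closed-form solution:

```r
# Two-penalty least squares: min ||X b - y||^2 + l1*||b||^2 + l2*||D b||^2,
# whose closed form is b = (X'X + l1*I + l2*D'D)^{-1} X'y.
set.seed(1)
n <- 50; p <- 10
X <- matrix(rnorm(n * p), n, p)
y <- X %*% rnorm(p) + rnorm(n)
D <- diff(diag(p))                       # first-difference (smoothness) penalty
l1 <- 0.5; l2 <- 2.0                     # the two regularization parameters
b <- solve(crossprod(X) + l1 * diag(p) + l2 * crossprod(D),
           crossprod(X, y))
```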
IRJET - Debarred Objects Recognition by PFL Operator - IRJET Journal
This document discusses a method for recognizing debarred objects like pistols, knives, and handguns in x-ray luggage scans using Partial Fuzzy Logic (PFL). PFL is used to estimate the degree of similarity between scanned objects and prohibited items. The method involves segmenting objects from x-ray images, then applying PFL to aggregate information and determine if an object matches prohibited items based on weighted criteria. PFL lies between logical "and" and "or" to provide a parameterized aggregation. The document tests the method on sample x-ray images to recognize knives and other banned objects.
Improving Returns from the Markowitz Model using GA - An Empirical Validation o... - idescitation
Portfolio optimization is the task of allocating the investor's capital among different assets in such a way that returns are maximized while, at the same time, risk is minimized. The traditional model followed for portfolio optimization is the Markowitz model [1], [2], [3]. The Markowitz model, in the ideal case of linear constraints, can be solved using quadratic programming. In real-life scenarios, however, the presence of nonlinear constraints, such as limits on the number of assets in the portfolio, constraints on the budgetary allocation to each asset class, transaction costs, and limits on the maximum weight that can be assigned to each asset, makes the problem increasingly difficult to solve computationally, i.e., NP-hard. Hence, soft-computing-based approaches seem best suited to solving such a problem. An attempt has been made in this study to use a soft computing technique (specifically, Genetic Algorithms) to overcome this issue. A Genetic Algorithm (GA) has been used to optimize the parameters of the Markowitz model such that overall portfolio returns are maximized while the standard deviation of the returns is minimized at the same time. The proposed system is validated by testing its ability to generate optimal stock portfolios with high returns and low standard deviations, with the assets drawn from stocks traded on the Bombay Stock Exchange (BSE). Results show that the proposed system is able to generate much better portfolios than the traditional Markowitz model.
Measuring the behavioral component of financial fluctuation: an analysis bas... - SYRTO Project
Measuring the behavioral component of financial fluctuation: an analysis based on the S&P500 - Caporin M., Corazzini L., Costola M., June 27, 2013. IFABS 2013 - posters session.
The document describes using the Runge-Kutta numerical method to analyze the Ramsey-Cass-Koopmans model of economic growth. It shows that the 4th order Runge-Kutta method approximates solutions to differential equations with increasing accuracy as the step size decreases. When applied to the Ramsey-Cass-Koopmans model, the method generates phase diagrams showing different trajectories for capital and consumption depending on initial conditions. The analysis confirms the Runge-Kutta method provides a reliable way to approximate the dynamics of the Ramsey-Cass-Koopmans model economy.
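A generic fourth-order Runge-Kutta stepper of the kind the document applies; the test equation below is a placeholder, not the Ramsey-Cass-Koopmans system itself:

```r
# Classical RK4: advance y' = f(t, y) from t0 to t1 in n steps.
rk4 <- function(f, y0, t0, t1, n) {
  h <- (t1 - t0) / n
  y <- y0; t <- t0
  for (i in 1:n) {
    k1 <- f(t, y)
    k2 <- f(t + h / 2, y + h / 2 * k1)
    k3 <- f(t + h / 2, y + h / 2 * k2)
    k4 <- f(t + h, y + h * k3)
    y <- y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t <- t + h
  }
  y
}
rk4(function(t, y) y, 1, 0, 1, 100)   # approx. exp(1) = 2.71828
```

For a system such as RCK, y0 would be a vector (capital, consumption) and f would return the vector of time derivatives; the stepper above works unchanged on vectors.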
This document discusses portfolio optimization and different algorithms used to solve portfolio optimization problems. It begins by formulating the unconstrained and constrained portfolio optimization problems. For the unconstrained problem, it uses quadratic programming to generate the efficient frontier. For the constrained problem, it uses mixed integer quadratic programming and heuristic algorithms like genetic algorithm, tabu search and simulated annealing. It compares the results of these different algorithms and concludes some perform better than others in terms of accuracy and time complexity for portfolio optimization problems with constraints.
An Efficient And Safe Framework For Solving Optimization Problems - Lisa Muthukumar
This document describes a new optimization framework called QuadOpt that combines interval analysis techniques with safe linear relaxations to provide rigorous and efficient global optimization. QuadOpt uses consistency techniques from QuadSolver to reduce variable domains and computes a safe lower bound on a linear relaxation of the problem. It performs branch and bound search to rigorously bound the global optimum. Experimental results on test problems show that QuadOpt provides certified solutions with fewer splits than other rigorous methods while being faster than nonsafe solvers.
DESIGN OF QUATERNARY LOGICAL CIRCUIT USING VOLTAGE AND CURRENT MODE LOGIC - VLSICS Design
In VLSI technology, designers' main concerns have been the area required and the performance of the device. Power consumption is one of the major concerns in VLSI design due to the continuous increase in chip density, the shrinking size of CMOS circuits, and the frequency at which circuits operate. Considering these parameters, logical circuits are designed using quaternary voltage-mode logic and quaternary current-mode logic. The power consumption of quaternary voltage-mode logic is 51.78% less than that of binary, while the area, in terms of the number of transistors required, is about 3 times more than binary. Although the quaternary voltage-mode circuit requires a larger area than the quaternary current-mode circuit, its power consumption is lower.
COVARIANCE ESTIMATION AND RELATED PROBLEMS IN PORTFOLIO OPTIMI... - CruzIbarra161
COVARIANCE ESTIMATION AND RELATED PROBLEMS IN PORTFOLIO OPTIMIZATION
Ilya Pollak, Purdue University, School of Electrical and Computer Engineering, West Lafayette, IN 47907, USA

ABSTRACT
This overview paper reviews covariance estimation problems and related issues arising in the context of portfolio optimization. Given several assets, a portfolio optimizer seeks to allocate a fixed amount of capital among these assets so as to optimize some cost function. For example, the classical Markowitz portfolio optimization framework defines portfolio risk as the variance of the portfolio return, and seeks an allocation which minimizes the risk subject to a target expected return. If the mean return vector and the return covariance matrix for the underlying assets are known, the Markowitz problem has a closed-form solution.

In practice, however, the expected returns and the covariance matrix of the returns are unknown and are therefore estimated from historical data. This introduces several problems which render the Markowitz theory impracticable in real portfolio management applications. This paper discusses these problems and reviews some of the existing literature on methods for addressing them.

Index Terms: Covariance, estimation, portfolio, market, finance, Markowitz

1. INTRODUCTION
The return of a security between trading day t1 and trading day t2 is defined as the change in the closing price over this time period, divided by the closing price on day t1. For example, the daily (i.e., one-day) return on trading day t is defined as (p(t) − p(t−1))/p(t−1), where p(t) is the closing price on day t and p(t−1) is the closing price on the previous trading day. Note that if t is a Monday or the day after a holiday, the previous trading day will not be the same as the previous calendar day.

Suppose an investment is made into N assets whose return vector is R, modeled as a random vector with expected return µ = E[R] and covariance matrix Λ = E[(R − µ)(R − µ)^T]. In other words, R = (R(1), ..., R(N))^T, where R(n) is the return of the n-th asset. It is assumed throughout the paper that the covariance matrix Λ is invertible. This assumption is realistic, since it is quite unusual in practice to have a set of assets whose linear combination has returns exactly equal to zero. Even if an investment universe contained such a set, the number of assets in the universe could be reduced to eliminate the linear dependence and make the covariance matrix invertible.

Out of these N assets, a portfolio is formed with allocation weights w = (w(1), ..., w(N))^T. The n-th weight is defined as the amount invested into the n-th asset, as a fraction of the overall investment into the portfolio: if the overall investment into the portfolio is $D, and $D(n) is invested into the n-th asset, then w(n) = D(n)/D. Therefore, by definition, the weights sum to one:

w^T 1 = 1,   (1)

where 1 is an N-vector of ones. Note that some of the weights may be negative, ...
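A brief sketch of the plug-in estimation step the paper critiques, using simulated returns: sample moments replace µ and Λ, and the standard closed-form global-minimum-variance weights w = Λ⁻¹1/(1ᵀΛ⁻¹1) satisfy constraint (1). The data here are simulated, not from the paper:

```r
# Plug-in estimation of mean and covariance, then the closed-form
# global-minimum-variance portfolio weights.
set.seed(42)
R <- matrix(rnorm(250 * 4, mean = 0.0004, sd = 0.01), ncol = 4)  # fake returns
mu  <- colMeans(R)           # estimated expected returns
Sig <- cov(R)                # estimated covariance matrix
ones <- rep(1, ncol(R))
w <- solve(Sig, ones) / sum(solve(Sig, ones))
sum(w)                       # weights sum to one, as in (1)
```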
The document discusses system approach and optimization in civil engineering. It defines optimization as making something as fully functional or effective as possible. The system approach applies quantitative methods and tools of optimization to problem solving and decision making. Some applications of optimization and system approach in civil engineering include designing structures like frames and bridges for minimum cost, designing structures for minimum weight given load conditions, designing water resource systems, and more. The document also discusses linear programming, nonlinear programming, and other optimization methods used in operations research. It provides examples to explain concepts like convex and concave functions.
Relevance of Particle Swarm Optimization Technique for the Solution of Econom... - IRJET Journal
This document presents the use of the particle swarm optimization (PSO) technique to solve the economic load dispatch (ELD) problem in power systems. The ELD problem aims to schedule power plant generation outputs to meet load demand at minimum operating cost while satisfying constraints. PSO is applied by initializing generator outputs as "particles" that fly through the search space to find the minimum cost. Results on 5-unit and 6-unit test systems show that PSO is able to determine the optimal outputs to meet time-varying loads at the lowest cost within constraints.
A Robust Method Based On LOVO Functions For Solving Least Squares Problems - Dawn Cook
The document presents a new robust method for solving least squares problems based on Lower Order-Value Optimization (LOVO) functions. The method combines a Levenberg-Marquardt algorithm adapted for LOVO problems with a voting schema to estimate the number of possible outliers without requiring it as a parameter. Numerical results show the algorithm is able to detect and ignore outliers to find better model fits to data compared to other robust algorithms.
Slides were prepared by referring to the text Machine Learning by Tom M. Mitchell (McGraw Hill, Indian Edition) and to video tutorials on NPTEL.
Slides by Alexander März:
The language of statistics is of a probabilistic nature. Any model that falls short of quantifying the uncertainty attached to its outcome is likely to provide an incomplete and potentially misleading picture. While this is an irrevocable consensus in statistics, machine learning approaches usually lack proper ways of quantifying uncertainty. In fact, a possible distinction between the two modelling cultures can be attributed to the (non-)existence of uncertainty estimates that allow for, e.g., hypothesis testing or the construction of estimation/prediction intervals. However, quantification of uncertainty in general, and probabilistic forecasting in particular, doesn't just provide an average point forecast; rather, it equips the user with a range of outcomes and the probability of each of those occurring.

In an effort to bring both disciplines closer together, the audience is introduced to a new framework of XGBoost that predicts the entire conditional distribution of a univariate response variable. In particular, XGBoostLSS models all moments of a parametric distribution (i.e., mean, location, scale and shape [LSS]) instead of the conditional mean only. Choosing from a wide range of continuous, discrete and mixed discrete-continuous distributions, modelling and predicting the entire conditional distribution greatly enhances the flexibility of XGBoost, as it allows one to gain additional insight into the data-generating process, as well as to create probabilistic forecasts from which prediction intervals and quantiles of interest can be derived. As such, XGBoostLSS contributes to the growing literature on statistical machine learning that aims at weakening the separation between Breiman's "Data Modelling Culture" and "Algorithmic Modelling Culture", so that models designed mainly for prediction can also be used to describe and explain the underlying data-generating process of the response of interest.
In a tight labour market, job-seekers gain bargaining power and leverage it into greater job quality—at least, that’s the conventional wisdom.
Michael, LMIC Economist, presented findings that reveal a weakened relationship between labour market tightness and job quality indicators following the pandemic. Labour market tightness coincided with growth in real wages for only a portion of workers: those in low-wage jobs requiring little education. Several factors—including labour market composition, worker and employer behaviour, and labour market practices—have contributed to the absence of worker benefits. These will be investigated further in future work.
Fabular Frames and the Four Ratio Problem - Majid Iqbal
Digital, interactive art showing the struggle of a society in providing for its present population while also saving planetary resources for future generations. Spread across several frames, the art is actually the rendering of real and speculative data. The stereographic projections change shape in response to prompts and provocations. Visitors interact with the model through speculative statements about how to increase savings across communities, regions, ecosystems and environments. Their fabulations combined with random noise, i.e. factors beyond control, have a dramatic effect on the societal transition. Things get better. Things get worse. The aim is to give visitors a new grasp and feel of the ongoing struggles in democracies around the world.
Stunning art in the small-multiples format brings out the spatiotemporal nature of societal transitions, against backdrop issues such as energy, housing, waste, farmland and forest. In each frame we see hopeful and frightful interplays between spending and saving. Problems emerge when one of the two parts of the existential anaglyph rapidly shrinks like Arctic ice, as factors cross thresholds. Ecological wealth and intergenerational equity are at stake. Not enough spending could mean economic stress, social unrest and political conflict. Not enough saving and there will be climate breakdown and 'bankruptcy'. So where does speculative design start and the gambling and betting end? Behind each fabular frame is a four-ratio problem. Each ratio reflects the level of sacrifice and self-restraint a society is willing to accept, against promises of prosperity and freedom. Some values seem to stabilise a frame while others cause collapse. Get the ratios right and we can have it all. Get them wrong and things get more desperate.
Independent Study - College of Wooster Research (2023-2024) FDI, Culture, Glo... - AntoniaOwensDetwiler
"Does Foreign Direct Investment Negatively Affect Preservation of Culture in the Global South? Case Studies in Thailand and Cambodia."
Do elements of globalization, such as Foreign Direct Investment (FDI), negatively affect the ability of countries in the Global South to preserve their culture? This research aims to answer this question by employing a cross-sectional comparative case study analysis utilizing methods of difference. Thailand and Cambodia are compared as they are in the same region and have a similar culture. The metric of difference between Thailand and Cambodia is their ability to preserve their culture. This ability is operationalized by their respective attitudes towards FDI; Thailand imposes stringent regulations and limitations on FDI while Cambodia does not hesitate to accept most FDI and imposes fewer limitations. The evidence from this study suggests that FDI from globally influential countries with high gross domestic products (GDPs) (e.g. China, U.S.) challenges the ability of countries with lower GDPs (e.g. Cambodia) to protect their culture. Furthermore, the ability, or lack thereof, of the receiving countries to protect their culture is amplified by the existence and implementation of restrictive FDI policies imposed by their governments.
My study abroad in Bali, Indonesia, inspired this research topic as I noticed how globalization is changing the culture of its people. I learned their language and way of life which helped me understand the beauty and importance of cultural preservation. I believe we could all benefit from learning new perspectives as they could help us ideate solutions to contemporary issues and empathize with others.
Falcon stands out as a top-tier P2P Invoice Discounting platform in India, bridging esteemed blue-chip companies and eager investors. Our goal is to transform the investment landscape in India by establishing a comprehensive destination for borrowers and investors with diverse profiles and needs, all while minimizing risk. What sets Falcon apart is the elimination of intermediaries such as commercial banks and depository institutions, allowing investors to enjoy higher yields.
An accounting information system (AIS) refers to tools and systems designed for the collection and display of accounting information so accountants and executives can make informed decisions.
Discover the Future of Dogecoin with Our Comprehensive Guidance - 36 Crypto
Learn in-depth about Dogecoin's trajectory and stay informed with 36crypto's essential and up-to-date information about the crypto space.
Our presentation delves into Dogecoin's potential future, exploring whether it's destined to skyrocket to the moon or face a downward spiral. In addition, it highlights invaluable insights. Don't miss out on this opportunity to enhance your crypto understanding!
https://36crypto.com/the-future-of-dogecoin-how-high-can-this-cryptocurrency-reach/
Online job posting (OJP) data from firms like Vicinity Jobs have emerged as a complement to traditional sources of labour demand data, such as the Job Vacancy and Wages Survey (JVWS). Ibrahim Abuallail, PhD Candidate, University of Ottawa, presented research relating to bias in OJPs and a proposed approach to effectively adjust OJP data to complement existing official data (such as from the JVWS) and improve the measurement of labour demand.
Dr. Alyce Su Cover Story - China's Investment Leader - msthrill
At World Expo 2010 Shanghai, the most visited Expo in world history (https://www.britannica.com/event/Expo-Shanghai-2010), China's official organizer of the Expo, CCPIT (China Council for the Promotion of International Trade, https://en.ccpit.org/), chose Dr. Alyce Su as the cover person, with a cover story, in the Expo's official magazine distributed throughout the Expo, showcasing China's new generation of leaders to the world.
1. ENDOGENOUS BENCHMARKING OF MUTUAL FUNDS WITH BOOTSTRAP DEA IN ‘R’: SOME INDIAN EVIDENCE
DR. RAM PRATAP SINHA
ASSOCIATE PROFESSOR OF ECONOMICS
GOVERNMENT COLLEGE OF ENGINEERING AND LEATHER TECHNOLOGY
BLOCK-LB, SECTOR-III, SALT LAKE, KOLKATA-700098
E-mail: rampratapsinha39@gmail.com, rp1153@rediffmail.com
EIGHTH NATIONAL CONFERENCE ON INDIAN CAPITAL MARKETS-2014
2. Introduction
• The performance of mutual funds is generally evaluated in a risk-return framework, the conceptual basis for which was provided by Markowitz (1952, 1959), Sharpe (1964) and Lintner (1965).
• The M-V framework provided by Markowitz permits the identification of the set of minimum-variance portfolios corresponding to a given/target rate of return. The CAPM framework, on the other hand, links the excess return on a portfolio to the excess return available on the market portfolio, thereby permitting exogenous benchmarking.
3. Objective of the Study
The study extends the traditional framework of mutual fund benchmarking in three directions:
(a) Use of multiple output indicators
(b) Incorporation of stochastic dominance indicators to apply in more general cases
(c) Use of bootstrap analysis to enable more robust evaluation of performance
4. The Mean-Variance Criteria
One of the earliest attempts at portfolio benchmarking was made by Markowitz (1952) and Tobin (1958) in the form of the mean-variance criterion.
The basic idea behind the mean-variance approach is that the optimal portfolio for an investor is not simply any collection of securities but a balanced portfolio which provides the investor with the best combination of return and risk, where return is measured by the expected value and risk by the variance of the probability distribution of portfolio return.
Given two return distributions with cumulative distributions F(x) and G(x), investors will prefer F over G if µF ≥ µG and VarF ≤ VarG (with at least one of the inequalities strict).
5. The M-V Utility Function
• Markowitz pointed out that, in the context of risk aversion, a quadratic of the form a + bR + cR² provides a close approximation to a smooth and concave utility function. In this case, maximization of expected utility implies:
Max E[U(R)] = Max [a + bµ + cE(R²)] = Max [a + bµ + c(µ² + σ²)]
where µ is the expected value of R and σ² is the variance of R. Such an investor therefore chooses his portfolio solely on the basis of the mean and variance of R, as the sketch below illustrates.
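• A minimal R sketch (with illustrative numbers, not study data) verifying that expected quadratic utility depends only on the mean and variance of R:

  # Monte Carlo check: E[a + bR + cR^2] = a + b*mu + c*(mu^2 + sigma^2)
  set.seed(1)
  a <- 0; b <- 1; c <- -0.5                    # c < 0: concave (risk-averse) utility
  R <- rnorm(1e6, mean = 0.01, sd = 0.02)      # simulated portfolio returns
  mean(a + b * R + c * R^2)                    # direct estimate of E[U(R)]
  a + b * mean(R) + c * (mean(R)^2 + var(R))   # mean-variance formula (matches up to sampling error)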
6. The Mean -Variance Criteria
• In order to understand how the mean-variance criterion operates, let us consider an n-security portfolio in which the returns on the n securities are denoted by r1, r2, ....., rn and their covariance matrix by σr². The expected return and variance of the portfolio are
µ = r1ω1 + r2ω2 + ..... + rnωn = Σriωi = rᵀω and σp² = ωᵀσr²ω
• where r is the column vector of expected returns on the n securities and ω is the column vector of weights of the n securities included in the portfolio. A numerical sketch follows.
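• A short R sketch with hypothetical data for n = 3 securities:

  # Portfolio mean and variance from weights, expected returns and covariances
  r <- c(0.010, 0.012, 0.008)                    # expected returns (hypothetical)
  w <- c(0.4, 0.4, 0.2)                          # portfolio weights, summing to one
  Sigma <- matrix(c(4, 1, 0,
                    1, 9, 2,
                    0, 2, 16) * 1e-4, nrow = 3)  # covariance matrix (sigma_r^2)
  mu_p  <- sum(r * w)                            # mu = r'w
  var_p <- as.numeric(t(w) %*% Sigma %*% w)      # sigma_p^2 = w' Sigma w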
7. Minimising Risk Relative to a Target Rate of Return
• Suppose the portfolio manager/investor wants to minimize risk relative to a target rate of return µT. The optimization program of the investor is:
Min ½ωᵀσr²ω
• subject to rᵀω = µT (the target rate of return) and eᵀω = 1, where e is a vector whose elements are all unity.
• To solve the problem, we form the Lagrangean
L = ½ωᵀσr²ω + λ1(rᵀω − µT) + λ2(eᵀω − 1)
• The first-order conditions of minimization give n+2 equations (including the two constraint equations) in n+2 unknowns: the n weights ω1, ω2, ....., ωn and the two multipliers λ1, λ2. They can be solved as a single linear system, as in the sketch below.
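• A minimal R sketch of this solution with the hypothetical Sigma and r used earlier:

  # Minimum-variance weights for a target return muT via the FOC linear system
  r <- c(0.010, 0.012, 0.008)                  # expected returns (hypothetical)
  Sigma <- matrix(c(4, 1, 0,
                    1, 9, 2,
                    0, 2, 16) * 1e-4, nrow = 3)
  n <- length(r); e <- rep(1, n)
  muT <- 0.010                                 # target rate of return (assumed)
  A <- rbind(cbind(Sigma, r, e),               # stationarity: Sigma w + l1 r + l2 e = 0
             c(r, 0, 0),                       # constraint: r'w = muT
             c(e, 0, 0))                       # constraint: e'w = 1
  sol <- solve(A, c(rep(0, n), muT, 1))
  w_opt <- sol[1:n]                            # optimal weights (multipliers follow)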
8. Maximising Return Relative to a Target Level of Risk
• The optimization problem of the investor now becomes
Max µ, where µ = r1ω1 + r2ω2 + ..... + rnωn = rᵀω
subject to σp² = σT² (the target level of variance) and eᵀω = 1
• The problem can be solved as before by forming a Lagrangean function.
9. Minimizing Risk and Maximising Return
• The investor can incorporate the two objectives of maximizing return and minimizing risk into a single objective function:
Min (½ωᵀσr²ω − λrᵀω)
subject to eᵀω = 1
• For λ > 0, the term −λrᵀω pushes rᵀω upwards to counterbalance the downward pull of ½ωᵀσr²ω. The first-order conditions are again linear, as the sketch below shows.
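• A sketch of this trade-off in R (same hypothetical Sigma and r; lambda is a chosen risk-tolerance parameter):

  # FOC: Sigma w - lambda r + gamma e = 0, together with e'w = 1
  r <- c(0.010, 0.012, 0.008)
  Sigma <- matrix(c(4, 1, 0,
                    1, 9, 2,
                    0, 2, 16) * 1e-4, nrow = 3)
  n <- length(r); e <- rep(1, n)
  lambda <- 2                                  # larger lambda -> more weight on return
  A <- rbind(cbind(Sigma, e), c(e, 0))
  w_lam <- solve(A, c(lambda * r, 1))[1:n]     # optimal weights for this lambda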
10. Extension to Non-normal Cases
• Hadar and Russell (1969) pointed out that, except in some special cases (such as the quadratic utility function), the specification of distributions in terms of their moments is not likely to yield strong results, since information about the moments cannot be used efficiently for ordering uncertain prospects when the utility function is unknown.
• In this context, Hadar and Russell proposed two decision rules based on stochastic dominance (ordering) which are stronger than the moment method.
• To provide a very brief introduction to the concept of stochastic dominance, let us consider a random variable x taking the values xi. Let f and g denote two probability functions of x, and let F(xi) and G(xi) be the respective cumulative distributions.
11. Concept of Stochastic Dominance
• First Order Stochastic Dominance (FSD):
In the example above, f(x) dominates g(x) if F(xi) ≤ G(xi) for all xi ∈ X (with strict inequality for at least one xi). Hadar and Russell proved that under this rule distributions may be ordered according to preference for any non-decreasing utility function.
• Second Order Stochastic Dominance (SSD):
The second rule is weaker than the first. In the discrete case, f(x) dominates g(x) under SSD if Σi≤r F(xi)Δxi ≤ Σi≤r G(xi)Δxi for all r < n, where xn is the largest value taken by the random variable and Δxi = xi+1 − xi. Under SSD, distributions may be ordered for any non-decreasing utility function which exhibits non-increasing marginal utility everywhere.
• Third Order Stochastic Dominance (TSD):
Whitmore (1970) introduced the concept of third-degree stochastic dominance as follows: f(x) dominates g(x) if Σi≤r F(xi)(Δxi)² ≤ Σi≤r G(xi)(Δxi)² for all r < n, with xn and Δxi defined as above. The first two dominance checks are easy to compute empirically, as the sketch below shows.
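• A minimal empirical check of FSD and SSD in R for two simulated return samples (illustrative distributions, not study data):

  # f dominates g by FSD if F <= G everywhere; by SSD if the running
  # integral of F never exceeds that of G
  set.seed(2)
  x <- rnorm(500, mean = 0.012, sd = 0.02)     # sample from f
  y <- rnorm(500, mean = 0.008, sd = 0.02)     # sample from g
  grid <- sort(unique(c(x, y)))
  Fx <- ecdf(x)(grid); Gy <- ecdf(y)(grid)
  fsd <- all(Fx <= Gy)                         # first order dominance
  dx  <- diff(grid)
  ssd <- all(cumsum(head(Fx, -1) * dx) <= cumsum(head(Gy, -1) * dx))  # second order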
12. Stochastic Dominance, Downside Risk & Finance Literature
• The concept of downside risk in the context of portfolio evaluation can be found in Roy (1952).
• However, a path-breaking development in the field of downside risk measures occurred with the Lower Partial Moment (LPM) risk measure of Bawa (1975) and Fishburn (1977). Bawa (1975) was the first to define the lower partial moment as a general family of below-target risk measures, and provided a proof that the LPM measure is mathematically related to stochastic dominance for risk-tolerance values of 0, 1 and 2. This model was later generalised by Fishburn, who formulated the conditions for identifying optimal and dominated choice sets, i.e. Conditional Stochastic Dominance, which enables the decomposition of the choice set into optimal and dominated sets.
13. Portfolio Evaluation – A Distance Function Approach
• In the context of multi-criteria portfolio evaluation, Shephard's (1953, 1970) distance function approach provides a sound conceptual basis for the derivation of evaluation criteria. The idea is borrowed from a multi-input multi-output production system, where distance functions provide a functional characterisation of the structure of the production technology.
• The input set of the production technology is characterised by the input distance function, while the output set is characterised by the output distance function.
• We consider a technology T using a nonnegative vector of inputs X = (x1, x2, ....., xn) ∈ Rⁿ₊ to produce a nonnegative vector of outputs Y = (y1, y2, ....., ym) ∈ Rᵐ₊. In functional terms, they can be related as:
Y = P(X) and X = L(Y)
14. Input Distance Function
• Given this, an input distance function can be defined as Dinput = Max{λ : X/λ ∈ L(Y)}. Intuitively speaking, an input distance function gives the maximum factor by which the producer's input vector can be radially contracted while remaining feasible for the output vector it produces. The reciprocal of the input distance function can be considered the radial measure of input-oriented technical efficiency. Using DEA we can compute input-oriented technical efficiency as:
Minimise µ
subject to: µx0 − Xλ ≥ 0, Yλ ≥ y0, Σλj = 1, λ ≥ 0
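• A from-scratch sketch of this linear program for one fund using the lpSolve package, with hypothetical one-input, two-output data in the spirit of the study:

  # Input-oriented VRS DEA score for fund j0: min theta over (theta, lambda)
  library(lpSolve)
  x <- c(4.0, 2.5, 3.2, 5.1)                   # input: variance of return
  Y <- rbind(c(1.0, 0.8, 1.1, 1.3),            # output 1: mean daily return
             c(0.6, 0.5, 0.7, 0.9))            # output 2: mean upside potential
  n <- length(x); j0 <- 1                      # fund being evaluated
  con <- rbind(c(x[j0], -x),                   # theta*x0 - X lambda >= 0
               cbind(0, Y),                    # Y lambda >= y0
               c(0, rep(1, n)))                # sum(lambda) = 1 (VRS convexity)
  res <- lp("min", c(1, rep(0, n)), con,
            c(">=", ">=", ">=", "="), c(0, Y[, j0], 1))
  theta <- res$solution[1]                     # technical efficiency of fund j0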
15. Output Distance Function
• An output distance function can be defined as Doutput = Min{µ : Y/µ ∈ P(X)}.
• Intuitively speaking, an output distance function gives the minimum factor by which the producer's output vector can be deflated while remaining feasible for the input vector it uses. The output distance function can be considered the radial measure of output-oriented technical efficiency.
• The output-oriented technical efficiency is calculated from:
Max φ
subject to: φy0 ≤ Yλ, Xλ ≤ x0, Σλj = 1, λ ≥ 0 (VRS)
16. Graph Hyperbolic Approach
• This implies maximisation of return and minimisation of risk at the same time. Consequently, the optimization problem for the observed mutual fund is:
• Min G
• subject to: Gx0 ≥ Xλ, (1/G)y0 ≤ Yλ, λ ≥ 0
• In the VRS case we add the convexity condition Σλj = 1. Technical efficiency = G (see the sketch below).
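• All three orientations can be computed in a few lines with the Benchmarking package; this is a sketch assuming its dea() interface (funds in rows, hypothetical data):

  # Input-, output- and graph-oriented VRS efficiency scores
  library(Benchmarking)
  X <- matrix(c(4.0, 2.5, 3.2, 5.1), ncol = 1) # input: variance of return
  Y <- cbind(c(1.0, 0.8, 1.1, 1.3),            # output 1: mean daily return
             c(0.6, 0.5, 0.7, 0.9))            # output 2: mean upside potential
  e_in    <- dea(X, Y, RTS = "vrs", ORIENTATION = "in")$eff
  e_out   <- dea(X, Y, RTS = "vrs", ORIENTATION = "out")$eff
  e_graph <- dea(X, Y, RTS = "vrs", ORIENTATION = "graph")$eff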
17. Introduction to Bootstrap
• Efron (1979) introduced the concept of the bootstrap.
• The bootstrap involves resampling from an original sample of data through computer-based simulation in order to obtain the sampling properties of random variables.
• The starting point of any bootstrap procedure is a sample of observed data X = {x1, x2, . . . , xn} drawn randomly from some population with an unknown probability distribution f.
• The premise of the bootstrap method is that the random sample actually drawn "mimics" its parent population.
• The bootstrap method suggested by Efron (1979) involves drawing samples (with replacement) directly from the observed data and is known as the naive bootstrap.
18. Naïve vs Smoothed Bootstrap
• The bootstrap method suggested by Efron (1979) involves drawing samples (with replacement) directly from the observed data and is known as the naive bootstrap.
• In this case the bootstrap sample is effectively drawn from a discrete population, which fails to recognise the fact that the underlying population density function f is continuous.
• Simar and Wilson (1998) suggested that the problem can be overcome by resorting to the smoothed bootstrap, which involves resampling via a fitted model.
19. Smoothed Bootstrap
• The smoothed bootstrap methodology uses kernel estimators as weight functions.
• If we write the naive bootstrap sample as Xnbs = {x1*, x2*, ....., xn*} and the smoothed bootstrap sample as Xsbs = {x1**, x2**, ....., xn**}, then the elements of the two are related as xi** = xi* + hεi, where εi is a random draw from the kernel density, h is the smoothing (bandwidth) parameter, and xi* and xi** are the ith elements of the naive and smoothed bootstrap samples.
• Each time we replicate the bootstrap we get a different sample X**, and hence a different estimate θ* = θ(X**). We therefore need to select a large number of bootstrap samples, B, in order to extract as many combinations of the xj (j = 1, 2, . . . , n) as possible. A minimal sketch of one smoothed draw follows.
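• A minimal R sketch of one smoothed draw with a Gaussian kernel (this omits the variance rescaling and boundary reflection used in the full Simar-Wilson procedure):

  # Smoothed bootstrap: naive resample plus kernel noise, x** = x* + h*eps
  smooth_draw <- function(x, h) {
    xstar <- sample(x, replace = TRUE)         # naive bootstrap draw
    xstar + h * rnorm(length(x))               # add smoothing noise
  }
  set.seed(3)
  x <- runif(16, 0.90, 1.00)                   # hypothetical efficiency scores
  h <- bw.nrd0(x)                              # Silverman's rule-of-thumb bandwidth
  xss <- smooth_draw(x, h)                     # one smoothed bootstrap sample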
20. Steps in Bootstrapping
• The steps followed in bootstrapping are briefly as follows (see the end-to-end sketch after this list):
• (a) Compute the technical efficiency θ̂ from the observed sample X.
• (b) Select the rth (r = 1, 2, . . . , B) independent bootstrap sample X*r, which consists of n data values drawn with replacement from the observed sample X; this gives the naive bootstrap sample.
• (c) Smooth the naive sample and compute the statistic θ*r = θ(X**r) from the rth smoothed bootstrap sample X**r.
• (d) Construct pseudo-data from the smoothed bootstrap efficiency scores and recompute technical efficiency.
• (e) Repeat steps (b), (c) and (d) a large number of times (B times).
• (f) Calculate the average of the bootstrap estimates as the arithmetic mean θe.
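• An end-to-end sketch; this assumes the dea.boot() interface of the Benchmarking package, which implements a smoothed (Simar-Wilson type) bootstrap of DEA scores:

  # Bootstrap DEA: point estimates, bias-corrected estimates, confidence bounds
  library(Benchmarking)
  X <- matrix(c(4.0, 2.5, 3.2, 5.1), ncol = 1) # hypothetical input data
  Y <- cbind(c(1.0, 0.8, 1.1, 1.3),
             c(0.6, 0.5, 0.7, 0.9))
  set.seed(4)
  bt <- dea.boot(X, Y, NREP = 2000, RTS = "vrs", ORIENTATION = "in")
  bt$eff                                       # theta-hat, step (a)
  bt$eff.bc                                    # bias-corrected scores, steps (b)-(f)
  bt$conf.int                                  # 2.5%/97.5% bounds per fund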
21. Bias Correction
• A measure of the accuracy of an estimator θ̂ of the parameter θ is the bias, E(θ̂) − θ. The bias-corrected estimator is θbc = θ̂ − bias. In our case we estimate bias = θe − θ̂, where θe is the mean of the bootstrap estimates.
• Thus the bias-corrected estimate of technical efficiency is θbc = 2θ̂ − θe, as in the sketch below.
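• In R the correction is one line; the numbers below are purely illustrative:

  # Bias-corrected technical efficiency from B bootstrap replicates
  theta_hat <- 0.963                              # point estimate from observed data
  theta_b   <- c(0.981, 0.975, 0.989)             # bootstrap replicates (B is normally large)
  theta_e   <- mean(theta_b)                      # mean of the bootstrap estimates
  theta_bc  <- theta_hat - (theta_e - theta_hat)  # equals 2*theta_hat - theta_e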
22. Confidence Interval
• From the distribution of the bootstrap technical efficiency scores we can also calculate lower and upper bounds for technical efficiency, taking the 2.5% and 97.5% percentiles as the bounds of a 95% confidence interval (sketch below).
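• A one-line percentile sketch in R (stand-in replicates, not study data):

  # 95% percentile interval from the bootstrap distribution of scores
  set.seed(5)
  theta_b <- runif(2000, 0.90, 1.00)           # stand-in bootstrap replicates
  quantile(theta_b, probs = c(0.025, 0.975))   # lower (2.5%) and upper (97.5%) bounds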
23. Inputs And Outputs And Period of Study
• In the present study we use the distance function approach to benchmark selected sectoral mutual fund schemes on the basis of information collected for a half-year.
• The distinguishing feature of the study is its use of a stochastic dominance-based output indicator.
• Mean daily return and mean upside potential (an indicator of second-order stochastic dominance) are taken as the two outputs, while the variance of daily return is taken as the input indicator.
• The input-output correspondence in the present study is therefore:
• Output [Mean Daily Return, Mean Upside Potential] = f(Variance of Return)
• The requisite information on daily NAVs for the in-sample mutual fund schemes has been collected from the AMFI website, and the mean return, mean upside potential and variance of return have been computed by the author, along the lines of the sketch below.
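• A sketch of these computations from a NAV series; the target return tau used for upside potential is an assumption here (the slide does not state it):

  # Outputs and input from daily NAVs (hypothetical series)
  nav <- c(100.0, 100.4, 99.8, 100.9, 101.2)   # daily NAVs
  r   <- diff(nav) / head(nav, -1)             # daily returns
  tau <- 0                                     # target return (assumed)
  mean_ret <- mean(r)                          # output 1: mean daily return
  upside   <- mean(pmax(r - tau, 0))           # output 2: mean upside potential
  risk     <- var(r)                           # input: variance of return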
24. Descriptive Statistics of Technical Efficiency Scores

Particulars                                 | Input Oriented Model | Output Oriented Model | Graph Hyperbolic Model
Mean Technical Efficiency (Uncorrected)     | 0.963                | 0.980                 | 0.987
Standard Deviation (Uncorrected)            | 0.0299               | 0.0161                | 0.0105
Mean Technical Efficiency (Bias Corrected)  | 0.942                | 0.969                 | 0.897
Standard Deviation (Bias Corrected)         | 0.0256               | 0.0129                | 0.1972
25. Confidence Interval of Technical Efficiency

Confidence Interval | Input Oriented Model | Output Oriented Model | Graph Hyperbolic Model
Lower Bound (2.5%)  | 0.9148               | 0.9529                | 0.9255
Upper Bound (97.5%) | 0.9618               | 0.9799                | 0.9312
26. Summing Up
• In the present study, 16 sectoral mutual fund schemes have been evaluated for the second half of 2010 using the input-oriented, output-oriented and graph hyperbolic measures.
• For the purpose of performance benchmarking, the present study makes both point and bootstrap estimates of performance. The bootstrap measures have been used to correct the bias in the point estimates of technical efficiency.