This document summarizes research on strong duality analysis for discrete-time constrained portfolio optimization problems. It begins by introducing the mathematical formulation of a discrete-time portfolio selection model with constraints expressed as convex inequalities. It then discusses a risk-neutral computational approach based on embedding the primal constrained problem into a family of unconstrained problems in auxiliary markets. Weak duality is shown to hold, relating the optimal values of the primal and auxiliary problems. The document defines a dual problem, known as Pliska's κ dual, that minimizes the optimal value over the family of auxiliary problems. Conditions for strong duality are presented, under which the optimal solution to the dual problem also solves the primal constrained problem.
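To make the relationship concrete, here is a schematic statement of the duality just described (the index ν for the auxiliary markets is this note's notation, not necessarily the paper's). Weak duality says the primal optimal value is dominated by every auxiliary optimal value:

V_primal ≤ inf_ν V_aux(ν).

Strong duality is the statement that some ν* attains this infimum with equality and that the optimal policy of the ν*-auxiliary (unconstrained) problem is feasible for the original constraints; that policy then solves the constrained problem as well.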
This document discusses branch and bound algorithms and NP-hard and NP-complete problems. It provides examples and proofs related to these topics.
1) Branch and bound is an algorithm that systematically enumerates candidate solutions, discarding subsets that are provably suboptimal. The knapsack problem is used as an example (a minimal sketch appears after this list).
2) NP-hard and NP-complete problems are those for which no polynomial-time algorithm is known. If any one NP-complete problem could be solved in polynomial time, then all NP-complete problems could be. Proving a problem NP-complete involves reducing a known NP-complete problem to the target problem.
3) Trees are connected graphs without cycles. They are used to represent hierarchies and
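A minimal best-first branch-and-bound sketch for the 0/1 knapsack, as referenced in item 1 above (an independent illustration; item data and function names are assumptions, not code from the summarized document):

```python
# Hedged sketch: best-first branch and bound for 0/1 knapsack.
import heapq

def knapsack_bb(values, weights, capacity):
    n = len(values)
    # Sort items by value density so the fractional bound is valid.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]

    def bound(k, cur_val, cur_wt):
        # Optimistic bound: fill remaining capacity greedily, allowing
        # a fractional piece of the first item that does not fit.
        b, room = cur_val, capacity - cur_wt
        for i in range(k, n):
            if w[i] <= room:
                room -= w[i]
                b += v[i]
            else:
                b += v[i] * room / w[i]
                break
        return b

    best = 0
    # Max-heap on the bound (negated for heapq): (-bound, level, value, weight).
    heap = [(-bound(0, 0, 0), 0, 0, 0)]
    while heap:
        neg_b, k, val, wt = heapq.heappop(heap)
        if -neg_b <= best or k == n:
            continue  # prune: this subtree's bound cannot beat the incumbent
        # Branch 1: take item k (if it fits).
        if wt + w[k] <= capacity:
            best = max(best, val + v[k])
            heapq.heappush(heap, (-bound(k + 1, val + v[k], wt + w[k]),
                                  k + 1, val + v[k], wt + w[k]))
        # Branch 2: skip item k.
        heapq.heappush(heap, (-bound(k + 1, val, wt), k + 1, val, wt))
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # -> 220
```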
This document discusses P, NP and NP-complete problems. It begins by introducing tractable and intractable problems, and defines problems that can be solved in polynomial time as tractable, while problems that cannot are intractable. It then discusses the classes P and NP, with P containing problems that can be solved deterministically in polynomial time, and NP containing problems that can be solved non-deterministically in polynomial time. The document concludes by defining NP-complete problems as those in NP that are as hard as any other problem in the class, in that any NP problem can be reduced to an NP-complete problem in polynomial time.
This document provides definitions and explanations of key concepts in algorithm design and analysis including:
- Performance measurement is concerned with obtaining the space and time requirements of algorithms.
- An algorithm is a finite set of instructions that accomplishes a task given certain inputs and criteria.
- Time complexity refers to the amount of computer time needed for an algorithm to complete, while space complexity refers to the memory required.
- Common asymptotic notations like Big-O, Omega, and Theta are used to describe an algorithm's scalability.
- Divide-and-conquer and greedy algorithms are important design techniques: divide-and-conquer breaks a problem into subproblems, while a greedy method builds a solution through locally optimal choices.
The document discusses the theory of NP-completeness. It begins by defining the complexity classes P, NP, NP-hard, and NP-complete. It then explains the concept of reduction and notes that no NP-complete problem is known to be solvable deterministically in polynomial time. The document provides examples of NP-complete problems like satisfiability (SAT), vertex cover, and the traveling salesman problem. It shows how nondeterministic algorithms can solve these problems and how they can be transformed into SAT instances. Finally, it proves that SAT is the first NP-complete problem by showing it is in NP and NP-hard.
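A small sketch of the "is in NP" half of that argument: checking a proposed satisfying assignment for a CNF formula takes only linear time, which is what places SAT in NP (the encoding and names below are assumptions of this note):

```python
# Hedged sketch: a polynomial-time verifier for CNF satisfiability.
# A formula is a list of clauses; each clause is a list of nonzero ints,
# where k means variable k and -k means its negation (DIMACS-style).

def verify_sat(formula, assignment):
    """Check a proposed certificate in time linear in the formula size."""
    return all(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in formula
    )

# (x1 or not x2) and (x2 or x3)
formula = [[1, -2], [2, 3]]
certificate = {1: True, 2: True, 3: False}
print(verify_sat(formula, certificate))  # -> True
```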
This document discusses NP-hard and NP-complete problems. It begins by defining the classes P, NP, NP-hard, and NP-complete. It then provides examples of NP-hard problems like the traveling salesperson problem, satisfiability problem, and chromatic number problem. It explains that to show a problem is NP-hard, one shows it is at least as hard as another known NP-hard problem. The document concludes by discussing how restricting NP-hard problems can result in problems that are solvable in polynomial time.
This document provides an overview of complexity theory concepts including:
- Asymptotic notation like Big-O, Big-Omega, and Big-Theta for analyzing algorithm runtime.
- The difference between deterministic and non-deterministic algorithms, with deterministic algorithms always providing the same output for a given input, and non-deterministic algorithms possibly providing different outputs.
- The classes P and NP, with P containing problems solvable in polynomial time by a deterministic algorithm, and NP containing problems verifiable in polynomial time by a non-deterministic algorithm.
- NP-complete problems being the hardest problems in NP, with examples like the knapsack problem, Hamiltonian path problem, and Boolean satisfiability problem.
Algorithm Design and Complexity - Course 6 (by Traian Rebedea)
This document provides an overview of algorithm design and complexity. It discusses different classes of problems including P vs NP problems. P problems can be solved in polynomial time, while NP problems can be verified in polynomial time but may not be solvable in polynomial time. NP-hard problems are at least as hard as NP problems, and NP-complete problems are NP-hard problems that are also in NP. The document describes techniques for solving difficult problems like backtracking and discusses examples like the n-queens problem.
This file covers the concepts of class P, class NP, NP-completeness, the Travelling Salesman Problem, the Clique problem, the Vertex Cover problem, the Hamiltonian problem, and the FFT and DFT.
Quantum Minimax Theorem in Statistical Decision Theory (RIMS2014) (by tanafuyu)
This is an almost self-contained explanation of our recent result. The contents are based on our talk at the RIMS2014 conference.
Recently, many fundamental and important results in statistical decision theory have been extended to the quantum system. Quantum Hunt-Stein theorem and quantum locally asymptotic normality are typical successful examples.
In our recent preprint, we show quantum minimax theorem, which is also an extension of a well-known result, minimax theorem in statistical decision theory, first shown by Wald and generalized by LeCam. Our assertions hold for every closed convex set of measurements and for general parametric models of density operator. On the other hand, Bayesian analysis based on least favorable priors has been widely used in classical statistics and is expected to play a crucial role in quantum statistics. According to this trend, we also show the existence of least favorable priors, which seems to be new even in classical statistics.
This document discusses the P versus NP problem in complexity theory. The P class contains problems that can be solved quickly by algorithms, while NP contains problems that can be verified quickly given a proposed solution. It is unknown whether NP problems can also be solved quickly (meaning P=NP), or if finding solutions to NP problems requires exponential time (meaning P≠NP). Solving this problem could have significant impacts on fields like cryptography, optimization, and theorem proving. However, despite extensive research, the P versus NP problem remains unsolved.
P, NP, NP-Complete, and NP-Hard
Reductionism in Algorithms
NP-Completeness and Cook's Theorem
NP-Complete and NP-Hard Problems
Travelling Salesman Problem (TSP)
Travelling Salesman Problem (TSP) - Approximation Algorithms
PRIMES is in P - (A hope for NP problems in P)
Millennium Problems
Conclusions
The document presents a new block cipher that blends concepts from the modified Feistel cipher and advanced Hill cipher. The cipher uses an involutory key matrix K to encrypt plaintext matrices P and Q through iterative applications of mixing, permutation, and XOR operations per equations 1.1 and 1.2. Cryptanalysis shows the cipher is strong as the encryption equations are nonlinear and functions like Shift() and Mix() cause diffusion in each round. The encryption and decryption processes are illustrated through flowcharts and algorithms.
https://telecombcn-dl.github.io/2017-dlsl/
Winter School on Deep Learning for Speech and Language. UPC BarcelonaTech ETSETB TelecomBCN.
The aim of this course is to train students in methods of deep learning for speech and language. Recurrent Neural Networks (RNN) will be presented and analyzed in detail to understand the potential of these state of the art tools for time series processing. Engineering tips and scalability issues will be addressed to solve tasks such as machine translation, speech recognition, speech synthesis or question answering. Hands-on sessions will provide development skills so that attendees can become competent in contemporary data analytics tools.
NP-completeness - Design and Analysis of Algorithms (by adeel990)
NP-completeness refers to problems that are both in the complexity class NP and NP-hard. These problems are considered intractable because no efficient polynomial-time solutions for them are known. The document lists several classic NP-complete problems, including Boolean satisfiability, the traveling salesman problem, and the Hamiltonian path problem. It also defines polynomial-time tractability and contrasts NP-complete problems with problems that are provably intractable.
1) NP-Completeness refers to problems that are in NP (can be verified in polynomial time) and are as hard as any problem in NP.
2) The first problem proven to be NP-Complete was the Circuit Satisfiability problem, which asks whether there exists an input assignment that makes a Boolean circuit output 1.
3) To prove a problem P is NP-Complete, it must be shown that P is in NP and that any problem in NP can be reduced to P in polynomial time. This establishes P as at least as hard as any problem in NP.
In most of the algorithms analyzed until now, we have been studying problems solvable in polynomial time. The class P consists of problems that, on inputs of size n, can be solved in worst-case time O(n^k) for some constant k. Informally, then, the problems believed to lie outside P are those that cannot be solved in O(n^k) time for any constant k. (Strictly speaking, NP stands for nondeterministic polynomial time, not "non-polynomial".)
This document provides an introduction to NP-completeness, including: definitions of key concepts like decision problems, classes P and NP, and polynomial time reductions; examples of NP-complete problems like satisfiability and the traveling salesman problem; and approaches to dealing with NP-complete problems like heuristic algorithms, approximation algorithms, and potential help from quantum computing in the future. The document establishes NP-completeness as a central concept in computational complexity theory.
IJERA (International Journal of Engineering Research and Applications) is an international, online, peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
This document summarizes and critiques the public-key cryptosystem proposed by Wagner and Magyarik in 1984. The summary identifies 5 main critiques of their system: 1) It is too vague in its general form, 2) Their concrete example is also vague and insecure, 3) It can generate "spurious keys", 4) It is actually based on the word choice problem rather than the word problem, and 5) It does not constitute a full cryptosystem but rather a starting point for further research. The document then proposes an alternative public-key cryptosystem based on transformation groups, using a group with a known coNP-complete word problem.
The document summarizes the concepts of P vs NP complexity classes. It states that P problems can be solved in polynomial time, like searching an array, while NP problems are solved in non-deterministic polynomial time, like the knapsack problem. It then defines different types of algorithms and complexity classes. The key classes discussed are P, NP, NP-Complete, and NP-Hard. It provides examples like sorting being in P, while the Hamiltonian problem is NP-Complete. A graphical representation is also included to illustrate the relationships between the complexity classes.
Linked CP Tensor Decomposition (presented by ICONIP2012) (by Tatsuya Yokota)
This document proposes a new method called Linked Tensor Decomposition (LTD) to analyze common and individual factors from a group of tensor data. LTD combines the advantages of Individual Tensor Decomposition (ITD), which analyzes individual characteristics, and Simultaneous Tensor Decomposition (STD), which analyzes common factors in a group. LTD represents each tensor as the sum of a common factor and individual factors. An algorithm using Hierarchical Alternating Least Squares is developed to solve the LTD model. Experiments on toy problems and face reconstruction demonstrate LTD can extract both common and individual factors more effectively than ITD or STD alone. Future work will explore Tucker-based LTD and statistical independence in the LTD model
The presentation outlines an approach for invariant-free clausal temporal resolution. It introduces temporal logic and its role in modeling dynamic systems. The temporal logic PLTL is described, as well as existing techniques for clausal resolution and clausal normal forms. The presentation proposes an invariant-free approach to temporal resolution and discusses ongoing and future work.
NP-completeness. Classes P and NP are two frequently studied classes of problems in computer science. Class P is the set of all problems that can be solved by a deterministic Turing machine in polynomial time.
The document introduces the concepts of P, NP, and NP-completeness. It states that P problems can be solved in polynomial time, while NP problems can be verified in polynomial time using a non-deterministic algorithm. NP-complete problems are the hardest problems in NP and if any NP-complete problem could be solved in polynomial time, then all NP problems could be solved that way. As an example, it describes how the Traveling Salesman Problem is NP-complete as the number of possible routes grows exponentially with the number of cities.
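A quick arithmetic check of that exponential growth: for a symmetric TSP on n cities, the standard count of distinct tours is (n − 1)!/2. The snippet below is an illustration of this note, not from the summarized document:

```python
# Hedged sketch: the number of distinct symmetric TSP routes, (n - 1)!/2,
# grows faster than any exponential in n.
from math import factorial

for n in (5, 10, 15, 20):
    print(n, factorial(n - 1) // 2)
# 5 -> 12; 10 -> 181440; 15 -> 43589145600; 20 -> 60822550204416000
```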
The document discusses pricing the Margrabe option using Monte Carlo simulation and an explicit closed-form solution. It begins by defining the Margrabe option and explaining its use. It then presents Margrabe's closed-form solution, which prices the option as a European call using a change of numeraire approach. Next, it analyzes the option's sensitivity to various parameters. Finally, it outlines different option pricing methods and focuses on Monte Carlo simulation and the change of numeraire approach.
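A minimal sketch relating the two approaches mentioned above: Margrabe's closed form for an exchange option, checked against a Monte Carlo estimate under correlated geometric Brownian motions. This is an independent illustration; all parameter values are assumptions.

```python
# Hedged sketch: Margrabe closed form vs. Monte Carlo under correlated GBMs.
import numpy as np
from scipy.stats import norm

def margrabe(s1, s2, sigma1, sigma2, rho, T):
    # Option to exchange asset 2 for asset 1: payoff max(S1(T) - S2(T), 0).
    sigma = np.sqrt(sigma1**2 + sigma2**2 - 2 * rho * sigma1 * sigma2)
    d1 = (np.log(s1 / s2) + 0.5 * sigma**2 * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return s1 * norm.cdf(d1) - s2 * norm.cdf(d2)

def margrabe_mc(s1, s2, sigma1, sigma2, rho, T, r=0.03, n=500_000, seed=0):
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    S1 = s1 * np.exp((r - 0.5 * sigma1**2) * T + sigma1 * np.sqrt(T) * z1)
    S2 = s2 * np.exp((r - 0.5 * sigma2**2) * T + sigma2 * np.sqrt(T) * z2)
    # Discounted risk-neutral expectation; the result should not depend on r,
    # since the payoff is homogeneous of degree one in the two asset prices.
    return np.exp(-r * T) * np.mean(np.maximum(S1 - S2, 0.0))

print(margrabe(100, 95, 0.25, 0.20, 0.5, 1.0))     # closed form
print(margrabe_mc(100, 95, 0.25, 0.20, 0.5, 1.0))  # Monte Carlo estimate
```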
This document provides an introduction to algorithms and their analysis. It defines what an algorithm is and discusses different aspects of analyzing algorithm performance, including time complexity, space complexity, asymptotic analysis using Big O, Big Theta, and Big Omega notations. It also covers greedy algorithms, their characteristics, and examples like the knapsack problem. Greedy algorithms make locally optimal choices at each step without reconsidering prior decisions. Not all problems can be solved greedily, and the document discusses when greedy algorithms can and cannot be applied.
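As a contrast to the 0/1 case mentioned above, here is a minimal sketch of a knapsack variant where the greedy choice is provably optimal: the fractional knapsack (an illustration under assumed data, not code from the summarized document).

```python
# Hedged sketch: greedy fractional knapsack. Sorting by value density and
# taking locally optimal pieces is optimal here, unlike in the 0/1 variant.

def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs; returns the best total value."""
    total = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)      # greedy: take as much as fits
        total += value * take / weight
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # -> 240.0
```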
1) The document presents a model for estimating the earnings of a mobile communication network using sensitivity analysis.
2) The model uses five parameters related to mobile traffic intensity - number of users, number of calls, call duration, initial cost per call duration, and price per call duration - to estimate profits under different scenarios.
3) Sensitivity analysis via tornado graphs shows that profits are most sensitive to the number of calls and call duration, suggesting companies should focus on increasing these factors to maximize earnings (a minimal sketch of this style of analysis follows the list).
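A minimal sketch of this kind of tornado analysis. The profit formula and all parameter values below are illustrative assumptions of this note, not figures from the summarized document:

```python
# Hedged sketch: one-at-a-time sensitivity sweep for a toy profit model.
base = dict(users=10_000, calls=4, duration=3.0, cost=0.02, price=0.05)

def profit(p):
    # Toy profit: traffic volume times margin per call-minute.
    return p["users"] * p["calls"] * p["duration"] * (p["price"] - p["cost"])

swings = {}
for name in base:
    lo, hi = dict(base), dict(base)
    lo[name] *= 0.9   # vary each parameter by +/-10%, holding others at base
    hi[name] *= 1.1
    swings[name] = abs(profit(hi) - profit(lo))

# The widest bars sit at the top of a tornado graph.
for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name:8s} {swing:12.2f}")
```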
Make Impression combining video and print (by Mikko Niemelä)
This document contains a series of photo credits attributed to different photographers, with the photos seemingly unrelated. At the end, it encourages the reader to get started creating their own presentation on SlideShare, suggesting this document serves as inspiration for including various photos from different sources.
This document summarizes a study that investigated the use of spliced swimmer bars as shear reinforcement in reinforced concrete beams. Three beams were tested: a control beam with standard stirrups, a beam with welded swimmer bars, and a beam with spliced swimmer bars. The spliced swimmer bars were a new type of shear reinforcement consisting of small inclined bars spliced to the longitudinal flexural steel bars. Testing results found that the beam with spliced swimmer bars exhibited similar shear strength and failure mode to the beam with welded swimmer bars, both performing better than the control beam. Cracks were monitored as the load increased.
This document appears to be contact information for a business located at Velazquez 15, 2D in Madrid, Spain. The phone number is 91.737.32.84 and the website is www.gestionemocional.com.
This document presents the concept of education and its purposes according to Mexico's General Education Law. It defines education as a permanent process that contributes to the development of the individual and the transformation of society, being a factor in the acquisition of knowledge and in forming people with a sense of social solidarity. The purposes of education are to develop human faculties harmoniously and to foster love of country and international solidarity. Education should be guided by scientific progress, the ...
The document presents a study that implemented segmentation and classification techniques for mammogram images to detect breast cancer malignancy. It used Gray Level Difference Method (GLDM) and Gabor texture feature extraction methods with Support Vector Machine (SVM) and K-Nearest Neighbors (K-NN) classifiers. The results showed that GLDM features with SVM achieved the best classification accuracy of 95.83%, outperforming the other combinations. The study concluded the GLDM and SVM approach provided the most effective classification of mammogram images.
This document provides an overview of IEEE standards for mobile ad hoc networks (MANETs). It discusses the evolution of MANETs and the key characteristics including dynamic topologies and limited bandwidth. The document describes the MANET architecture including enabling technologies, networking layers, and applications/middleware. It then focuses on explaining the IEEE 802.11 standards for wireless local area networks, comparing 802.11a, 802.11b, and 802.11g in terms of channels, data rates, frequencies/modulation, range/density, and compatibility. The purpose is to survey the IEEE standards that help enable ad hoc networking capabilities.
This document summarizes a study that investigated the effects of maturity level and drying methods on rheological and physicochemical properties of reconstituted breadfruit flour. Freshly harvested mature and immature breadfruits were dried using three methods - oven, sun, and biomass fuelled dryer, then milled into flour. Proximate analysis showed sun-dried flour had the highest protein content for both mature and immature samples. Drying and reconstitution reduced antinutritional content. Maturity and drying method had little effect on softness index. Sensory evaluation found sun and oven-dried samples were most acceptable. For high quality flour, the study recommends biomass drying of mature fruits due to higher protein and
This document discusses the design optimization of a cam shaft angle monitoring system for industrial improvements. It begins by introducing cam shafts and their importance in properly timing engine valves. It then describes using linear variable differential transformers (LVDTs) to precisely measure cam shaft angle. The document outlines the system components, including an LVDT sensor, servo motor, electromagnetic valves and other sensors. It discusses designing the system to measure cam shaft angle to within 0.5mm precision and control product quality. Finally, it evaluates mounting designs and material choices to optimize the monitoring system.
This document showcases 8 different photos taken by various photographers and suggests that the reader may be inspired to create their own presentation using those types of photos on the platform Haiku Deck on SlideShare. It ends by prompting the reader to get started making their own presentation.
This document provides a summary of qualifications and work experience for Muhammad Ahmad Yaseen, a mechanical engineer. It outlines his experience in various roles related to oil and gas pipelines, plants, and pressure reduction stations over the past 15+ years, including as a construction manager, site/construction manager, senior mechanical and piping engineer, and more. It also lists his education qualifications and skills in areas like hydraulic systems, pneumatic circuits, computer programming, and standards like API and ASME.
This document proposes laying fiber optic cables along existing railway tracks in Sudan to connect remote cities and towns. Some key points:
- Fiber optic cables have advantages over regular cables for data transmission, but are expensive to install. Railway tracks provide cleared linear routes to lay cables cost-effectively.
- The Sudan railway network spans over 5,898 km and connects many remote locations, providing an opportunity to establish a fiber optic network along the tracks.
- The proposal suggests laying fiber optic cables in pipes buried between or beside railway tracks for new and existing tracks. This would provide a secure and inexpensive way to connect remote areas of Sudan.
IRJET - Analytic Evaluation of the Head Injury Criterion (HIC) within the Fram... (by IRJET Journal)
This document presents an analytic evaluation of the Head Injury Criterion (HIC) within the framework of constrained optimization theory. The HIC is a weighted impulse function used to predict the probability of closed head injury based on measured head acceleration. Previous work analyzed the unclipped HIC function, but the clipped HIC formulation used in practice limits the evaluation window duration. The author develops analytic relationships for determining the window initiation and termination points to maximize the clipped HIC function. Example applications illustrate the general solutions for when head acceleration is defined by a single function or composite functions over the evaluation domain.
COVARIANCE ESTIMATION AND RELATED PROBLEMS IN PORTFOLIO OPTIMIZATION (by CruzIbarra161)
Ilya Pollak
Purdue University, School of Electrical and Computer Engineering
West Lafayette, IN 47907, USA
ABSTRACT
This overview paper reviews covariance estimation problems and related issues arising in the context of portfolio optimization. Given several assets, a portfolio optimizer seeks to allocate a fixed amount of capital among these assets so as to optimize some cost function. For example, the classical Markowitz portfolio optimization framework defines portfolio risk as the variance of the portfolio return, and seeks an allocation which minimizes the risk subject to a target expected return. If the mean return vector and the return covariance matrix for the underlying assets are known, the Markowitz problem has a closed-form solution.

In practice, however, the expected returns and the covariance matrix of the returns are unknown and are therefore estimated from historical data. This introduces several problems which render the Markowitz theory impracticable in real portfolio management applications. This paper discusses these problems and reviews some of the existing literature on methods for addressing them.

Index Terms: Covariance, estimation, portfolio, market, finance, Markowitz
1. INTRODUCTION
The return of a security between trading day t1 and trading day t2 is defined as the change in the closing price over this time period, divided by the closing price on day t1. For example, the daily (i.e., one-day) return on trading day t is defined as (p(t) − p(t−1))/p(t−1), where p(t) is the closing price on day t and p(t−1) is the closing price on the previous trading day. Note that if t is a Monday or the day after a holiday, the previous trading day will not be the same as the previous calendar day.

Suppose an investment is made into N assets whose return vector is R, modeled as a random vector with expected return µ = E[R] and covariance matrix Λ = E[(R − µ)(R − µ)^T]. In other words, R = (R(1), . . . , R(N))^T, where R(n) is the return of the n-th asset. It is assumed throughout the paper that the covariance matrix Λ is invertible. This assumption is realistic, since it is quite unusual in practice to have a set of assets whose linear combination has returns exactly equal to zero. Even if an investment universe contained such a set, the number of assets in the universe could be reduced to eliminate the linear dependence and make the covariance matrix invertible.

Out of these N assets, a portfolio is formed with allocation weights w = (w(1), . . . , w(N))^T. The n-th weight is defined as the amount invested into the n-th asset, as a fraction of the overall investment into the portfolio: if the overall investment into the portfolio is $D, and $D(n) is invested into the n-th asset, then w(n) = D(n)/D. Therefore, by definition, the weights sum to one:

w^T 1 = 1,    (1)

where 1 is an N-vector of ones. Note that some of the weights may be negative, ...
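A minimal numerical sketch of the closed-form solution referenced in the abstract: minimize w^T Λ w subject to w^T 1 = 1 and w^T µ = m. The asset data below are assumptions of this note, not from the paper.

```python
# Hedged sketch: the standard two-constraint mean-variance optimum.
import numpy as np

def markowitz_weights(mu, cov, target):
    n = len(mu)
    ones = np.ones(n)
    inv = np.linalg.inv(cov)
    # Scalars of the classical solution w* = inv(Cov) (lam * 1 + gam * mu).
    A = ones @ inv @ ones
    B = ones @ inv @ mu
    C = mu @ inv @ mu
    D = A * C - B * B
    lam = (C - B * target) / D
    gam = (A * target - B) / D
    return inv @ (lam * ones + gam * mu)

mu = np.array([0.08, 0.12, 0.10])
cov = np.array([[0.040, 0.006, 0.004],
                [0.006, 0.090, 0.010],
                [0.004, 0.010, 0.060]])
w = markowitz_weights(mu, cov, target=0.10)
print(w, w.sum(), w @ mu)  # weights; sum should be 1; mean return 0.10
```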
This document summarizes key concepts related to martingales and stopping times in probability theory. It begins with an introduction to martingales and their properties. A martingale is a stochastic process where the expected future value is equal to the present value. The document then discusses stopping times, which are random variables that determine when a stochastic process will stop. A key theorem discussed is Doob's Optional Stopping Theorem, which establishes conditions under which a martingale that is stopped at a random time remains a martingale. The document concludes by defining stochastic processes and stopped processes, which are processes that are stopped at a specified stopping time.
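A minimal simulation illustrating the optional stopping theorem described above: a symmetric random walk is a martingale, and stopping it at the first exit from (−a, b) keeps its expected value at 0, since the theorem's boundedness conditions hold here. The example is this note's, not the document's.

```python
# Hedged sketch: Doob's optional stopping theorem on a stopped random walk.
import random

def stopped_walk(a, b, rng):
    x = 0
    while -a < x < b:
        x += rng.choice((-1, 1))
    return x

rng = random.Random(1)
samples = [stopped_walk(3, 5, rng) for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to E[X_tau] = 0
# Side check (gambler's ruin): P(hit +b) should be a/(a+b) = 3/8 here.
print(sum(1 for s in samples if s == 5) / len(samples))
```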
A Stochastic Model by the Fourier Transform of PDE for the GLP-1 (by IJERA Editor)
The peptide hormone glucagon-like peptide GLP-1 has important actions resulting in glucose lowering along with weight loss in patients with type 2 diabetes. As a peptide hormone, GLP-1 has to be administered by injection. A few small-molecule agonists to peptide hormone receptors have been described.
Here we develop a model for credit risk based on a model with stochastic eigenvalues, called principal component stochastic covariance.
equity, implied, and local volatilities (by Ilya Gikhman)
This document discusses connections between stock volatility, implied volatility, and local volatility in option pricing models. It provides an overview of the Black-Scholes pricing model, which assumes stock volatility is known. However, implied volatility estimated from market option prices does not match the true stock volatility. The local volatility model develops implied volatility as a function of underlying variables to better match market prices, without relying on an assumed stock process.
My latest paper: equity, implied, and local volatilities (by Ilya Gikhman)
In this paper we present a critical point on the connections between stock volatility, implied volatility, and local volatility. The essence of the Black-Scholes pricing model is the assumption that the option price is formed by a no-arbitrage portfolio. This assumption effects the replacement of the real underlying stock by its risk-neutral counterpart. Market practice goes even further: the volatility of the underlying should also be changed, and this practice calls for implied volatility. The underlying with implied volatility is specific to each option. The local volatility development presents the value of implied volatility.
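As a concrete reference point for the "local volatility development" mentioned above, the standard formalization (Dupire's formula, stated here as background rather than as a claim about this paper's content, and assuming zero dividends) recovers the local variance directly from market call prices C(K, T) as a function of strike K and maturity T:

σ_loc²(K, T) = (∂C/∂T + r K ∂C/∂K) / ((1/2) K² ∂²C/∂K²),

where r is the risk-free rate.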
This document summarizes a research paper that examines the optimal investment, consumption, and life insurance selection problem for a wage earner. The problem is modeled using a financial market with one risk-free asset and one risky jump-diffusion asset, along with an insurance market composed of multiple life insurance companies. The goal is to maximize the wage earner's expected utility from consumption during life, wealth at retirement or death, by choosing an optimal investment, consumption, and insurance strategy. The authors use dynamic programming to characterize the optimal solution and prove existence and uniqueness of a solution to the associated nonlinear Hamilton-Jacobi-Bellman equation.
The price density function, a tool for measuring investment risk, volatility a... (by Tinashe Mangoro)
In this paper I derive a density function for describing the distribution of an investment's price. From that function I then show how to use it to calculate volatility and interest-rate averages, and also to hedge risk against interest-rate movements.
Expanding further the universe of exotic options closed pricing formulas in t... (by caplogic-ltd)
The document proposes a pricing method for exotic options like Best Of and Rainbow options that results in a closed-form pricing formula. The method assumes returns follow a Brownian motion under the Black-Scholes model. The pricing formula is a linear combination of the current market prices of the underlying assets multiplied by a probability expressed in the risk-neutral measure. This probability can be evaluated using the cumulative function of the normal multivariate distribution if the payoff is defined as a comparison of asset prices at different times. The paper provides proofs and discusses how to evaluate the required probability.
This document describes an uncertain volatility model for pricing equity option trading strategies when the volatilities are uncertain. It uses the Black-Scholes Barenblatt equation developed by Avellaneda et al. to derive price bounds. The model is implemented in C++ using recombining trinomial trees to discretize the asset prices over time and space. The code computes the upper and lower price bounds by solving the Black-Scholes Barenblatt PDE using numerical techniques, with the volatility set based on the sign of the option gamma.
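The summarized implementation is in C++; below is an independent, minimal Python sketch of the same idea under stated assumptions: an explicit finite-difference solver for the Black-Scholes-Barenblatt upper price bound that picks the volatility at each grid node from the sign of the option gamma. Grid sizes and market parameters are made up, and the time step is chosen small enough for explicit-scheme stability.

```python
# Hedged sketch: explicit finite differences for the BSB upper bound.
import numpy as np

def bsb_upper(payoff, s_max, r, vol_lo, vol_hi, T, ns=200, nt=4000):
    ds = s_max / ns
    dt = T / nt
    S = np.linspace(0.0, s_max, ns + 1)
    V = payoff(S)
    for _ in range(nt):
        gamma = (V[2:] - 2 * V[1:-1] + V[:-2]) / ds**2
        delta = (V[2:] - V[:-2]) / (2 * ds)
        # Worst-case (upper-bound) volatility: high vol where gamma > 0.
        sig = np.where(gamma > 0, vol_hi, vol_lo)
        rhs = 0.5 * sig**2 * S[1:-1]**2 * gamma + r * S[1:-1] * delta - r * V[1:-1]
        V[1:-1] += dt * rhs            # march backwards in calendar time
        V[0] = 0.0                     # S = 0: this payoff is worthless there
        V[-1] = 2 * V[-2] - V[-3]      # linear extrapolation at the far boundary
    return S, V

# Upper price bound for a call spread (long 90-call, short 110-call).
S, V = bsb_upper(lambda s: np.maximum(s - 90, 0) - np.maximum(s - 110, 0),
                 s_max=300.0, r=0.02, vol_lo=0.10, vol_hi=0.30, T=1.0)
print(np.interp(100.0, S, V))
```

The lower bound follows the same scheme with the volatility choice flipped (low vol where gamma is positive).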
1) The document outlines drawbacks in the Black-Scholes option pricing theory, including mathematical errors in its derivations. Specifically, the assumption that a hedging portfolio eliminates risk is incorrect as a third term was omitted from the change in the portfolio value.
2) It also discusses issues with the local volatility adjustment concept, noting that transforming the constant diffusion coefficient to a local volatility surface does not actually explain the smile effect observed in options data.
3) While local volatility aims to match implied volatilities observed in the market, the theory suggests the local volatility surface should actually be equal to the original constant diffusion coefficient.
Market risk and liquidity of the risky bonds (by Ilya Gikhman)
This document discusses modeling the effect of liquidity on risky bond pricing using a reduced form approach. It begins by presenting a simplified model where default can only occur at maturity. It then extends this to a discrete time approximation for default occurrence. The key concepts discussed are:
- Defining bid and ask prices for risk-free and corporate bonds to model liquidity spread
- Using a single price framework and extending it to account for liquidity spread
- Modeling the corporate bond price as a random variable based on default/no default scenarios
- Defining market and spot prices of bonds and the associated market risks for buyers and sellers
- Estimating the recovery rate and default probability given observations of spot prices over time
Slides FIS5.pdf - Outline: 1. Fixed Income Derivatives: The Forward-Risk Adjusted Measure; 2. Example (by budabrooks46239; slides by Dr Lara Cathcart, 2015)

The problem. Consider a fixed-income derivative with a single payoff at time T which depends on the term structure. In particular, we will look at options on zero-coupon bonds. For a call option on a zero-coupon bond maturing at time T1, the time-T payoff, and hence the value of the derivative, is given by

V_T = max(P(T, T1) − K, 0).    (1)

By the no-arbitrage theorem, the price today (t = 0) is

V_0 = E^Q_0 [ e^(−∫_0^T r_s ds) V_T ],    (2)

where the expectation is taken under the risk-neutral distribution (also called the Q measure). Thus the price depends on the stochastic process for the short rate and on the contractual specification of the security (i.e., how the payoff is linked to the term structure).

The price V_0 in equation (2) is the expectation of the product of two dependent random variables, and calculating this expectation is often quite difficult. The purpose of this note is to present a change-of-measure technique which considerably simplifies the evaluation of V_0.

Specifically, we are going to calculate V_0 as

V_0 = P(0, T) E^(Q_T)_0 [V_T],    (3)

where Q_T is a new probability measure (distribution), the so-called forward-risk adjusted measure. This technique was introduced in the fixed-income literature by Jamshidian (1991).

Model setup and notation. Our term structure is a general one-factor HJM model; see Heath, Jarrow and Morton (1992). Under the Q measure, forward rates are governed by

df(t, T) = −σ(t, T) σ_P(t, T) dt + σ(t, T) dW^Q_t,    (4)

where

σ_P(t, T) = −∫_t^T σ(t, u) du.    (5)

Bond prices evolve according to the SDE

dP(t, T) = r_t P(t, T) dt + σ_P(t, T) P(t, T) dW^Q_t,    (6)

so σ_P(t, T) is the time-t volatility of the zero maturing at time T.

The Forward-Risk Adjusted Measure. The price of the derivative security follows the SDE

dV_t = r_t V_t dt + σ_V(t) V_t dW^Q_t.    (7)

This means that, under the risk-neutral distribution, the expected rate of return equals the short rate (just like any other security), and the return volatility is σ_V(t). So far neither V_t nor σ_V(t) is known, but this is not essential for the following arguments. In fact, the only thing that matters is that the process has the form (7), since this facilitates pricing by the forward-risk adjusted measure.

We begin by defining the deflated price process

F_t ≡ V_t / P(t, T)    (8)

for t ∈ [0, T]. We can interpret F_t as the price of V_t in units of the T-maturity bond price (i.e., as a relative price).

The Forward-Ri.
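The extracted slides break off at the heading above. For completeness, here is a reconstruction of the standard continuation, consistent with equations (6)-(8) but not taken from the original slide text: applying Ito's quotient rule to F_t = V_t / P(t, T) and using (6) and (7),

dF_t = F_t (σ_V(t) − σ_P(t, T)) (dW^Q_t − σ_P(t, T) dt) = F_t (σ_V(t) − σ_P(t, T)) dW^(Q_T)_t,

where W^(Q_T) is a Brownian motion under the forward measure Q_T by Girsanov's theorem. Since F_t is driftless under Q_T, F_0 = E^(Q_T)_0 [F_T]; with P(T, T) = 1 this gives V_0 = P(0, T) E^(Q_T)_0 [V_T], which is exactly equation (3).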
1) The document discusses pricing models for derivatives such as options and interest rate swaps. It introduces concepts such as local volatility, which models implied volatility as a function of strike price and time to maturity.
2) Black-Scholes pricing is based on the assumption of a perfect hedging strategy, but the document notes this is formally incorrect as the hedging portfolio defined does not satisfy the required equations.
3) Local volatility presents the option price as a function of strike and time to maturity, with the diffusion coefficient estimated from option price data, whereas Black-Scholes models the price as a function of the underlying and time, with volatility as an input.
A Stochastic Limit Approach To The SAT Problem (by Valerie Felton)
This document proposes using quantum adaptive stochastic systems to solve NP-complete problems like SAT in polynomial time. It summarizes the SAT problem, discusses existing quantum algorithms for it, and introduces the concept of using channels instead of just unitary operators to model more realistic quantum computations. It argues that combining a quantum SAT algorithm with a stochastic limit approach using channels could provide a method to distinguish computation results that existing algorithms cannot, potentially solving NP-complete problems efficiently.
This document discusses issues with the derivation of the Black-Scholes equation and option pricing model. It highlights two popular derivations of the Black-Scholes equation, noting ambiguities in the original derivation. It proposes defining the hedged portfolio over a variable time interval to address these ambiguities. The document also notes drawbacks of the Black-Scholes price, including that it only guarantees a risk-free return over an infinitesimal time period and does not reflect market prices which may incorporate other strategies.
In this paper we show how the ambiguities in the derivation of the BSE can be eliminated.
We pay attention to the option as a hedging instrument and present a definition of the option price based on market-risk weighting. In this approach we define a random market price for each market scenario. The spot price is then interpreted as one that reflects the balance between the profit-loss expectations of the market participants.
This document discusses testing the normality assumption of log-returns for stock prices. It summarizes that the Black-Scholes model, widely used in pricing derivatives, assumes log-returns are normally distributed. The author tests this assumption on over 1000 company stock prices from the Nasdaq composite index using Kolmogorov-Smirnov, Shapiro-Wilk, and Anderson-Darling goodness-of-fit tests for normality with daily, weekly, and monthly price data from 2000-2011.
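A minimal sketch of this style of goodness-of-fit testing, run on synthetic heavy-tailed data rather than the Nasdaq prices used in the document (all parameters below are illustrative assumptions):

```python
# Hedged sketch: normality tests on log-returns of a synthetic price path.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Heavy-tailed (Student-t) shocks, so the tests should reject normality.
prices = 100 * np.exp(np.cumsum(rng.standard_t(df=4, size=2000) * 0.01))
log_returns = np.diff(np.log(prices))

# Standardize before the KS test against the standard normal.
z = (log_returns - log_returns.mean()) / log_returns.std(ddof=1)
print("Kolmogorov-Smirnov p-value:", stats.kstest(z, "norm").pvalue)
print("Shapiro-Wilk p-value:      ", stats.shapiro(log_returns).pvalue)
print("Anderson-Darling statistic:", stats.anderson(log_returns, dist="norm").statistic)
```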
CVA In Presence Of Wrong Way Risk and Early Exercise - Chiara Annicchiarico, ... (by Michele Beretta)
We will show how to calibrate the main parameter of the model and how we have used it in order to evaluate the CVA and the CVAW of a one derivative portfolio with the possibility of early exercise.
This document proposes methods for generating electricity from speed breakers. It discusses 5 classifications of speed breaker power generators that use different mechanisms: 1) a chain drive mechanism, 2) a rack and pinion system, 3) direct use of the load through a reciprocating device, 4) a translator and stator topology, and 5) a pressure lever mechanism. The document also outlines the advantages of using speed breakers for power generation such as low cost and maintenance and being a renewable source. Some challenges are also noted such as selecting a suitable generator and dealing with rain damage.
Cassava waste water was used as an admixture to replace distilled water in ratios of 5%, 10%, 15%, and 20% for producing sandcrete blocks. 60 sandcrete blocks of size 450mm x 150mm x 225mm were produced with different admixture ratios and a control with 0% admixture. The blocks were cured for 7, 14, 21, and 28 days and then tested for moisture content, specific gravity, water absorption, and compressive strength. Test results showed that blocks with 20% cassava waste water admixture met the minimum compressive strength requirement of 3.30 N/mm2 set by Nigerian standards, indicating the potential of cassava waste water to improve sandcrete block quality and
The document presents a theorem on random fixed points in metric spaces. It begins with introductions to fixed point theory, random fixed point theory, and relevant definitions. The main result is Theorem 3.1, which proves that if a self-mapping E on a complete metric space X satisfies certain contraction conditions involving parameters between 0 and 1, then E has a unique fixed point. The proof constructs a Cauchy sequence that converges to the unique fixed point. The document contributes to the study of random equations and random fixed point theory, which has applications in nonlinear analysis, probability theory, and other fields.
1. The document discusses applying multi-curve reconstruction technology to seismic inversion to improve accuracy and reliability. It focuses on reconstructing SP and RMN curves from well logs that are affected by various distortions.
2. The process of reconstructing the curves involves removing baseline drift, standardizing values, applying linear filtering, and fitting the curves. This removes interference and retains valid lithological information.
3. Reconstructing high quality curves improves the resolution and credibility of seismic inversion results. The method is shown to effectively predict sand distribution with little error.
This document compares the performance of a Minimum-Mean-Square-Error (MMSE) adaptive receiver and a conventional Rake receiver for receiving Ultra-Wideband (UWB) signals over a multipath fading channel. It first describes the UWB pulse shapes and channel model used, including the 6th derivative of the Gaussian pulse and the IEEE 802.15.3a modified Saleh-Valenzuela channel model. It then discusses the Direct-Sequence and Time-Hopping transmission and multiple access schemes for UWB. The document presents the receiver structures for the MMSE adaptive receiver and Rake receiver and compares their performance using MATLAB simulations.
This document summarizes a study on establishing logging interpretation models for reservoir parameters like porosity, permeability, oil saturation, and gas saturation in the Gaotaizi Reservoir of the L Oilfield. Models were developed using core data from 4 wells and include:
1) A porosity model relating acoustic travel time to porosity with an error of 0.92%
2) A permeability model relating permeability to porosity with an error of 0.31%
3) An oil saturation model using resistivity data with empirically determined parameters
4) A method to determine original gas saturation from mercury injection data.
Application of the models improved interpretation precision and allowed recalculation of oil and gas reserves for the
This document discusses predicting spam videos on social media platforms using machine learning. It proposes using attributes like number of likes, comments, and view count to classify videos as spam or not spam. A predictive algorithm is developed that uses threshold values for attributes and natural language processing of comments to classify videos. Testing of the algorithm on a dataset achieved a spam prediction precision of 93.6%. Issues with small datasets decreasing accuracy are also discussed, along with continuing work to address this issue.
1) The study experimentally evaluated the compatibility relationship between polymer solutions and oil layers through core flooding tests with different permeability cores.
2) The results showed that injection rate decreased with increasing polymer concentration and molecular weight, and increased with permeability.
3) Based on the results, boundaries for injection capability were established and a compatibility chart was proposed to guide polymer solution selection for different sedimentary microfacies in the field based on permeability and pore size.
1. The document discusses the identification of lithologic traps in the D3 Member of the Gaonan Region using seismic attribute analysis, acoustic impedance inversion, and sedimentary microfacies analysis.
2. Several lithologic traps were identified in the I and II oil groups of the D3 Member, with the largest trap located between wells G46 and G146X1 covering an area of about 2.35 km2.
3. Impedance inversion, seismic attribute analysis, and sedimentary microfacies characterization using 3D seismic data helped determine the location and development of effective lithologic traps in the thin sandstone-shale interbeds of the target stratum.
This document examines using coal ash as a partial replacement for cement in concrete. Coal ash was substituted for cement at rates of 5%, 10%, and 15% by weight. Testing found that concrete with a 5% substitution of coal ash exhibited only a slight decrease in compressive strength of 2% at 28 days while gaining improved workability. Higher substitution rates of 10% and 15% coal ash led to greater decreases in compressive and tensile strength. The study concludes that a 5% substitution of coal ash for cement provides benefits of reduced cost and improved workability with minimal strength impacts, representing an effective use of a waste material that addresses sustainability.
Accounting professional judgment involves handling accounting events and compiling financial reports according to regulations and standards. However, professional judgment is sometimes manipulated to distort accounting information. The document discusses three ways manipulation occurs: 1) abandoning accounting principles, 2) optional changes to accounting policies, and 3) abuse of accounting estimates. The causes of manipulation include distorted motivations from corporate governance issues and catering to various stakeholder interests. Strengthening supervision and improving the accounting system are proposed to manage manipulation of professional judgment.
The document discusses research on the distribution of oil and water in the eastern block of the Chao202-2 area in China. It establishes standards for identifying oil, poor oil, dry, and water layers using well logging data. Analysis shows structural reservoirs are dominant and fault and sand body configuration control oil-water distribution. Oil-water distribution varies between fault blocks from "up oil, bottom water" to "up water, bottom oil" depending on structure and sand body development.
The document describes an intelligent fault diagnosis system for reciprocating pumps that uses pressure and flow signals as inputs. It consists of hardware for data acquisition and a software system for signal processing, feature extraction, and fault diagnosis using wavelet neural networks. The system was able to accurately diagnose three main fault types - seal ring faults, valve damage, and spring faults - based on differences observed in the pressure curves. Testing on over 12 samples of each fault type achieved a correct diagnosis rate of over 94%. The system provides a fast and effective means of remotely monitoring reciprocating pumps and identifying faults.
This document discusses the application of meta-learning algorithms in banking sector data mining for fraud detection. It proposes using Classification and Regression Tree (CART), AdaBoost, LogitBoost, Bagging and Dagging algorithms for classification of banking transaction data. The experimental results show that Bagging algorithm has the best performance with the lowest misclassification rate, making it effective for banking fraud detection through data mining. Data mining can help banks detect patterns for applications like credit scoring, payment default prediction, fraud detection and risk management by analyzing customer transaction history and loan details.
This document presents a numerical solution for unsteady heat and mass transfer flow past an infinite vertical plate with variable thermal conductivity, taking into account Dufour number and heat source effects. The governing equations are non-linear and coupled, and were solved numerically using an implicit finite difference scheme. Various parameters, including Dufour number and heat source, were found to influence the velocity, temperature, and concentration profiles. Skin friction, Nusselt number, and Sherwood number were also calculated.
The document discusses methods for obtaining a background image using depth information from a depth camera to more accurately extract foreground objects. It finds that accumulating depth images and taking the median value at each pixel provides the most accurate background image. The accuracy of three methods - average, median, and mode - are evaluated using simulated depth data of a captured plane. The median method provides the best results, followed by average, while mode performs worst. More accumulated images provide a more accurate background image across all methods.
This document presents a mathematical model for determining the minimum overtaking sight distance (OSDm) required for an ascending vehicle to safely pass another slower vehicle on a single lane highway with an incline. It defines sight distance, stopping sight distance, perception-reaction time and derives equations to calculate the reaction distance (d1), overtaking distance (d2), vehicle travel distance during overtaking (d3), and total minimum OSDm based on vehicle characteristics, road geometry, and coefficients of friction. The safe overtaking zone is defined as 3 times the minimum OSDm. The model accounts for effects of slope angle and aims to satisfy laws of mechanics for overtaking maneuvers on inclined two-way single lane highways.
This document discusses a novel technique for better analysis of ice properties using Kalman filtering. It summarizes previous research on sea ice segmentation using SAR imagery and dual polarization techniques. It proposes using an automated SAR algorithm along with Kalman filtering to more accurately detect sea ice properties from RADARSAT1 and RADARSAT2 imagery data. The document reviews techniques for image segmentation, dual polarization, PMA detection, and related work on sea ice classification using statistical ice properties, edge preserving region models, and object extraction methods.
This document summarizes a study on the bioaccumulation of heavy metals in bass fish (Morone saxatilis) caught at Rodoni Cape in the Adriatic Sea in Albania. Samples of bass fish were collected from five sites and analyzed for mercury, lead, and cadmium levels in their muscles. The concentrations of heavy metals varied between fish and sites but were below international limits for human consumption. While the fish were found to be safe for eating, the study recommends continuous monitoring of metal levels in fish from the area due to various factors that can influence metal uptake over time.
This document discusses optimal maintenance policies for repairable systems with linearly increasing hazard rates. It considers a system with a constant repair rate and predetermined availability requirement. There are two maintenance policies: corrective maintenance only, and preventive maintenance at set time intervals. The goal is to determine the preventive maintenance interval that guarantees the availability requirement at minimum cost. Equations are developed to calculate the availability under each policy and the optimal preventive maintenance interval based on both availability and cost. A numerical example is provided to demonstrate the decision process in determining the optimal policy.
IOSR Journal of Engineering (IOSRJEN) www.iosrjen.org
ISSN (e): 2250-3021, ISSN (p): 2278-8719
Vol. 05, Issue 02 (February 2015), V2, PP 01-08
Discrete-Time Constrained Portfolio Optimization: Strong Duality Analysis
Lan Yi*
1. Management School, Jinan University, Guangzhou 510632, China
Abstract: We study in this paper the strong duality for discrete-time convex constrained portfolio selection problems when adopting a risk neutral computational approach. In contrast to the continuous-time models, there is no known existence condition in discrete-time models that ensures strong duality. Investigating the relationship among the primal problem, the Lagrangian dual and the Pliska's dual, we prove in this paper that strong duality can always be guaranteed for constrained convex portfolio optimization problems in discrete-time models when the constraints are expressed by a set of convex inequalities.
Keywords: Portfolio optimization; incomplete market; utility; investment constraints; duality; martingale approach.
I. INTRODUCTION
We consider in this paper the issue of strong duality for convex inequality constrained portfolio selection problems in a discrete financial model. The continuous version of this problem has been investigated extensively since 1992 and some prominent results have been achieved. Xu and Shreve [3,4] show that the convex duality approach succeeds in solving problems with a no-short-selling constraint. Cvitanic and Karatzas [2] develop a convex duality theory for general convex constrained portfolio optimization problems. As Cvitanic and Karatzas [2] confine admissible policies to be bounded adapted processes that make the wealth process nonnegative, the utility function used in their model, $U(\cdot)$, is defined on $(0,+\infty)$ and satisfies (i) $c \mapsto cU'(c)$ is nondecreasing on $(0,\infty)$, and (ii) there exist some $\alpha \in (0,1)$ and $\gamma \in (1,\infty)$ such that $\alpha U'(x) \geq U'(\gamma x)$ for all $x \in (0,\infty)$. Cvitanic and Karatzas [2] then introduce a family of unconstrained problems and build up the corresponding dual problem. Finally, they prove the strong duality theorem that the optimal solution of the dual problem also solves the primal problem.
For the discrete financial models studied in this paper, we define admissible policies to be general bounded adapted processes and thus define the objective utility on the entire $\mathbb{R}$. Similar to Cvitanic and Karatzas [2], Pliska [1] introduces a family of unconstrained problems for a constrained discrete financial model and gives the strong duality condition under which the optimal solution of the dual problem also solves the primal problem. However, to our knowledge, there is no known result in the literature on an existence condition ensuring that the strong duality condition holds in a discrete-time model as it does in the continuous-time model.
In this paper, we would like to close the gap between continuous-time models and discrete-time models, and prove that the strong duality condition always holds in discrete-time models when the utility function satisfies $cU'(c+\theta) < \infty$ for all $c \in \mathbb{R}$ and $-\infty < \theta < \infty$. We build up in Section 2 a discrete-time financial model, and formulate the constrained portfolio selection problem mathematically. We discuss in Section 3 the Pliska's dual problem and present the strong duality theorem. We derive in Section 4 the main result of this paper: the theory of guaranteed strong duality. We demonstrate our results via an illustrative example in Section 5 before we conclude the paper in Section 6.
II. MATHEMATICAL FORMULATION
We consider a financial market, consisting of $n$ risky assets and one risk-free asset, in which investors make their investment decisions at multiple time instants, $t = 0, 1, \ldots, T-1$. Let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_t, P)$ be the filtered probability space, where $\Omega := \{\omega_1, \ldots, \omega_K\}$ is the sample space with $K$ finite samples. Denote the stochastic process of the risky securities' returns as $\{\boldsymbol{\mu}_t\}_{t=0,\ldots,T-1}$, where $\boldsymbol{\mu}_t = (\mu_t(1), \ldots, \mu_t(n))'$ is a random vector, and the bond return process as $\{r_t\}_{t=0,\ldots,T-1}$, where $r_t$ is a deterministic scalar. Denote $\{R_t\}_{t=0,\ldots,T-1}$ as the extra return process with $R_t = (R_t(1), \ldots, R_t(n))'$ and $R_t(i) = \mu_t(i) - r_t$.
Furthermore, we introduce two assumptions on the financial market, which guarantee the completeness of the market.
Assumption 2.1. (i) At any time $t$, there exist $m_t := (n+1)^t$ elements $A_t^1, \ldots, A_t^{m_t}$ such that $A_t^1 \cup \cdots \cup A_t^{m_t} = \Omega$, $A_t^i \cap A_t^j = \emptyset$ for $i \neq j$, and $\mathcal{F}_t = \sigma(A_t^1, \ldots, A_t^{m_t})$; (ii) for every $i \leq m_t$, $A_t^i = \bigcup_{j=1}^{n+1} A_{t+1}^{(i-1)(n+1)+j}$; (iii) the assets' return matrix

$$\begin{pmatrix} r_t & \cdots & r_t \\ \boldsymbol{\mu}_t(A_{t+1}^{(i-1)(n+1)+1}) & \cdots & \boldsymbol{\mu}_t(A_{t+1}^{(i-1)(n+1)+(n+1)}) \end{pmatrix}_{(n+1)\times(n+1)}$$

is full rank for any $A_t^i \in \mathcal{F}_t$.
Assumption 2.2. The financial market is arbitrage-free.
An investor with initial wealth $v$ would like to invest her wealth in the market. Denote her self-financing trading strategies as $\{\theta_t\}_{t=0,1,\ldots,T-1}$, where $\theta_t = (\theta_t(1), \ldots, \theta_t(n))'$ with $\theta_t(i)$ being the dollar amount invested in the $i$-th risky security at time $t$. Let $V_t$ be the portfolio value at time $t$. The dollar amount invested in the bond at time $t$ is then $V_t - \sum_{i=1}^n \theta_t(i)$. Therefore, the wealth process satisfies

$$V_{t+1} = V_t r_t + \theta_t' R_t. \tag{1}$$
We assume in this paper that, when there is no constraint on trading strategies, the market governed by the stochastic difference equation described above,

$$\begin{cases} V_{t+1} = V_t r_t + \theta_t' R_t; \\ \theta_t \in \mathbb{R}^n, \quad t = 0, 1, \ldots, T-1; \\ V_0 = v, \end{cases}$$

satisfies both Assumptions 2.1 and 2.2, thus being a complete market.
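As a concrete illustration of the recursion (1), the following minimal Python sketch (ours, not from the paper) simulates one path of the self-financing wealth process; the bond returns, sampled risky returns and dollar positions below are illustrative assumptions.

```python
# Minimal sketch of the wealth recursion (1): V_{t+1} = V_t r_t + theta_t' R_t.
# All market data below are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
T, n = 3, 2                        # horizon and number of risky assets
r = np.array([1.01, 1.01, 1.02])   # deterministic bond returns r_t
V = 1.0                            # initial wealth V_0 = v

for t in range(T):
    mu = 1.0 + rng.normal(0.02, 0.1, size=n)  # sampled risky returns mu_t(i)
    R = mu - r[t]                              # extra returns R_t(i) = mu_t(i) - r_t
    theta = np.array([0.3, 0.2]) * V           # dollar amounts held in risky assets
    V = V * r[t] + theta @ R                   # recursion (1)

print(f"one sample path of terminal wealth: V_T = {V:.4f}")
```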
The subject we study in this paper is a constrained portfolio selection problem. Under the condition that $\theta_t$ is constrained in a convex set $K_t$, the investor pursues her investment by maximizing her expected utility of the terminal wealth, $U(\cdot,\omega): \mathbb{R} \to \mathbb{R}$, where $U(\cdot,\omega)$ is assumed to be differentiable, strictly increasing and concave for each $\omega \in \Omega$. For example, $K_t = \{\theta_t \in \mathbb{R}^n;\ \theta_t(i) \geq 0,\ i = 1, \ldots, n\}$ when short selling is prohibited. In summary, the mathematical model of the investor's constrained portfolio selection problem is posed as follows,

$$(P)\qquad \begin{array}{ll} \max & E[U(V_T)] \\ \text{s.t.} & V_{t+1} = V_t r_t + \theta_t' R_t; \\ & \theta_t \in K_t \subseteq \mathbb{R}^n, \quad t = 0, 1, \ldots, T-1; \\ & V_0 = v. \end{array}$$

If $K_t$ is a proper subset of $\mathbb{R}^n$, problem (P) is a portfolio selection problem in an incomplete market, as some contingent claims cannot be hedged by any admissible portfolio due to the constraints.
III. RISK NEUTRAL COMPUTATIONAL APPROACH
Following Pliska [1], we define the support function of $-K_t$ as

$$\delta_t(\nu_t) := \sup_{\theta_t \in K_t} (-\theta_t' \nu_t).$$

The effective domain of $\delta_t(\cdot)$ is then given by

$$\tilde{K}_t := \{\nu_t \in \mathbb{R}^n;\ \delta_t(\nu_t) < \infty\}.$$
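For the no-short-selling example above, the support function and its effective domain are explicit, and the small sketch below (ours) encodes this special case: $\delta_t(\nu) = 0$ when $\nu \geq 0$ componentwise (the supremum is attained at $\theta = 0$) and $+\infty$ otherwise, so $\tilde{K}_t$ is the nonnegative orthant.

```python
# Support function delta(nu) = sup_{theta in K} (-theta' nu) for the
# no-short-selling cone K = {theta >= 0}: 0 if nu >= 0, +inf otherwise.
import numpy as np

def delta_no_short(nu):
    nu = np.asarray(nu, dtype=float)
    return 0.0 if np.all(nu >= 0) else np.inf

print(delta_no_short([0.1, 0.0]))   # 0.0 -> nu lies in the effective domain K_tilde
print(delta_no_short([0.1, -0.2]))  # inf -> nu outside K_tilde
```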
We introduce the predictable stochastic process $\nu := \{\nu_t;\ t = 0, 1, \ldots, T-1,\ \nu_t \in \tilde{K}_t\}$. Let $\mathcal{D}$ be the set of all such processes $\nu$. For each $\nu \in \mathcal{D}$, we construct an auxiliary market $M_\nu$ with the following modified returns,

$$r_t^\nu = r_t + \frac{\delta_t(\nu_t)}{V_t^\nu},$$

$$\mu_t^\nu(i) = \mu_t(i) + \frac{\delta_t(\nu_t)}{V_t^\nu} + \nu_t(i), \quad i = 1, \ldots, n,$$

$$V_{t+1}^\nu = V_t^\nu r_t^\nu + \theta_t' R_t^\nu,$$

where $R_t^\nu = (R_t^\nu(1), \ldots, R_t^\nu(n))'$ with $R_t^\nu(i) = \mu_t^\nu(i) - r_t^\nu = R_t(i) + \nu_t(i)$. When $V_t^\nu = 0$, we let $r_t^\nu = r_t$ and $\mu_t^\nu(i) = \mu_t(i) + \nu_t(i)$. Notice that $\theta_t / V_t^\nu$ is the proportional trading strategy adopted in Pliska [1].
The first step in the risk neutral computational approach (see [1]) is to embed the primal constrained portfolio selection problem (P) into a family of unconstrained portfolio selection problems in $M_\nu$,

$$(P_\nu)\qquad \begin{array}{ll} \max & E[U(V_T^\nu)] \\ \text{s.t.} & V_{t+1}^\nu = V_t^\nu r_t^\nu + \theta_t' R_t^\nu; \\ & \theta_t \in \mathbb{R}^n, \quad t = 0, 1, \ldots, T-1; \\ & V_0^\nu = v. \end{array}$$

Note that Assumption 2.1 still holds in the auxiliary market $M_\nu$, as the return matrix in the auxiliary market $M_\nu$ is obtained by performing some elementary transformations on the return matrix of the original market, due to the predictability of the process $\nu$. Thus, problem $(P_\nu)$ for given $\nu \in \mathcal{D}$ can still be efficiently solved by using the martingale-like approach in [1].
It is easy to see that, for $\theta_t \in K_t$, $t = 0, 1, \ldots, T-1$,

$$\begin{aligned} V_T^\nu &= V_{T-1}^\nu r_{T-1}^\nu + \theta_{T-1}' R_{T-1}^\nu = V_{T-1}^\nu r_{T-1} + [\delta_{T-1}(\nu_{T-1}) + \theta_{T-1}' \nu_{T-1}] + \theta_{T-1}' R_{T-1} \\ &\geq V_{T-1}^\nu r_{T-1} + \theta_{T-1}' R_{T-1} \\ &\geq [V_{T-2}^\nu r_{T-2} + \theta_{T-2}' R_{T-2}]\, r_{T-1} + \theta_{T-1}' R_{T-1} \geq \cdots \\ &\geq v \prod_{t=0}^{T-1} r_t + \sum_{t=0}^{T-1} \theta_t' R_t \prod_{i=t+1}^{T-1} r_i = V_T, \end{aligned}$$

since $\delta_t(\nu_t) + \theta_t' \nu_t \geq 0$ for every $\theta_t \in K_t$ by the definition of the support function. Due to the increasing property of the utility function, we get the following weak duality.

Proposition 3.1 (Weak duality). Let $J(v)$ be the optimal value of the primal problem (P) and $J_\nu(v)$ be the optimal value of problem $(P_\nu)$. Then

$$J(v) \leq J_\nu(v), \quad \forall \nu \in \mathcal{D}.$$
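The inequality of Proposition 3.1 is easy to observe numerically. The sketch below uses toy data of our own (one period, one risky asset, two equally likely states, log utility, no short selling, so $\delta(\nu) = 0$ for $\nu \geq 0$) and grid-searches both problems, printing $J_\nu(v) \geq J(v)$ for several $\nu$.

```python
# Toy check of weak duality: J(v) <= J_nu(v) for nu in the effective domain.
import numpy as np

v, r = 1.0, 1.0
R = np.array([0.6, -0.2])          # extra returns in the two states (assumed)
p = np.array([0.5, 0.5])           # state probabilities

def best_value(nu, thetas):
    # max over theta of E[ln(v r + theta (R + nu))]; rows with wealth <= 0 dropped
    VT = v * r + np.outer(thetas, R + nu)
    ok = np.all(VT > 0, axis=1)
    return np.max(np.log(VT[ok]) @ p)

J_primal = best_value(0.0, np.linspace(0.0, 4.9, 4901))   # theta in K = [0, inf)
for nu in (0.0, 0.05, 0.1):                               # nu >= 0, delta(nu) = 0
    J_nu = best_value(nu, np.linspace(-8.0, 8.0, 16001))  # unconstrained in M_nu
    print(f"nu = {nu:.2f}: J_nu(v) = {J_nu:.4f} >= J(v) = {J_primal:.4f}")
```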
As $J_\nu(v)$ offers an upper bound for $J(v)$ for any $\nu \in \mathcal{D}$, the second step in the risk neutral computational approach [1] is to find the tightest upper bound by solving the following dual problem,

$$(D)\qquad \nu^* = \arg\min_{\nu \in \mathcal{D}} J_\nu(v),$$

such that, hopefully, the optimal solution to the unconstrained problem in the market $M_{\nu^*}$ will turn out to be the optimal solution to the constrained problem in the primal constrained market and the corresponding optimal objective values will coincide, i.e., $J_{\nu^*}(v) = J(v)$. We call the dual problem (D) in this paper the Pliska's dual of (P).
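Continuing in the same toy setting, the next sketch (again our own data, chosen so that the no-short-selling constraint binds) solves the Pliska dual (D) by grid search; the minimizing $\nu^*$ is strictly positive and the dual optimal value collapses to the primal one.

```python
# Toy solve of Pliska's dual (D): minimize J_nu(v) over nu >= 0.
# Data chosen so the unconstrained optimizer would short the asset;
# then theta* = 0, J(v) = ln(1) = 0, and the minimizer is nu* > 0.
import numpy as np

v, r = 1.0, 1.0
R = np.array([0.2, -0.4])
p = np.array([0.5, 0.5])

def J_nu(nu):
    thetas = np.linspace(-2.0, 2.0, 2001)      # unconstrained problem in M_nu
    VT = v * r + np.outer(thetas, R + nu)
    ok = np.all(VT > 0, axis=1)
    return np.max(np.log(VT[ok]) @ p)

nus = np.linspace(0.0, 0.3, 301)
vals = np.array([J_nu(nu) for nu in nus])
k = int(np.argmin(vals))
print(f"nu* ~ {nus[k]:.3f}, J_nu*(v) ~ {vals[k]:.4f}")   # expect ~0.100 and ~0.0000
```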
Proposition 3.2 (Strong duality, see [1]). Suppose that for some $\hat{\nu} \in \mathcal{D}$, the optimal trading strategy $\hat{\theta}$ of $(P_{\hat{\nu}})$ satisfies
(a) $\hat{\theta}_t \in K_t$,
(b) $\delta_t(\hat{\nu}_t) + \hat{\theta}_t' \hat{\nu}_t = 0$.
Then $\hat{\theta}$ is optimal for the primal constrained portfolio selection problem (P), and

$$J_{\hat{\nu}}(v) = J(v) \leq J_\nu(v)$$

for all $\nu \in \mathcal{D}$.
A crucial question is the existence guarantee of such a $\hat{\nu}$ for achieving strong duality. Pliska states the following in [1]: the obvious candidate for such a $\hat{\nu}$ is $\nu^*$, the solution of the dual problem (D). After computing $\nu^*$, one then checks whether $\theta^*$, the optimal trading strategy for $(P_{\nu^*})$, satisfies conditions (a) and (b) in Proposition 3.2. If both conditions are satisfied, then $\theta^*$ will be optimal for the primal constrained portfolio selection problem (P). However, as emphasized in [1], there is no known result to guarantee such an existence.
The main purpose of this paper is to present a guaranteed strong duality result when the convex set $K_t$ is specified by a set of convex inequalities.
IV. GUARANTEED STRONG DUALITY
Let us consider problem (P), where the feasible convex set $K_t$ is specified by a set of convex inequalities,

$$K_t = \{\theta_t;\ G_t(\theta_t) \leq b_t\}, \tag{2}$$

where $G_t := (G_t^1, G_t^2, \ldots, G_t^{d_t})'$ with each $G_t^i$ being a twice continuously differentiable convex function, $i = 1, \ldots, d_t$, and $b_t$ is a $d_t$-dimensional vector.
As we know, the primal problem (P) can be tackled either as a stochastic control problem, where the trading strategy at time $t$, $\theta_t$, is an $\mathcal{F}_t$-measurable random vector, or as a static optimization problem, where all the realizations of $\theta_t$ are considered separately based on our discrete financial model. In the latter case, the objective function in (P) can be reformulated as follows,

$$E\Big[ U\Big( v \prod_{t=0}^{T-1} r_t + \sum_{t=0}^{T-1} (\theta_t(\omega))' R_t(\omega) \prod_{i=t+1}^{T-1} r_i \Big) \Big].$$

Notice that $\theta_t(\omega) = \theta_t(A_t^i)$ if $\omega \in A_t^i$, due to the tree structure of the market. The decision vectors are $\theta_t(A_t^i)$ for $t = 0, 1, \ldots, T-1$ and $i = 1, \ldots, (n+1)^t$. When we deal with the primal problem (P) as a static one, we first formulate its Lagrangian dual problem.
Given $K_t = \{\theta_t;\ G_t(\theta_t) \leq b_t\}$, the Lagrangian dual of problem (P) is given as follows,

$$(LD)\qquad \min_{\lambda \geq 0} \max_{\theta}\ \sum_{t=0}^{T-1} \sum_{i=1}^{(n+1)^t} (\lambda_t(A_t^i))' \big[ b_t - G_t(\theta_t(A_t^i)) \big] + E\Big[ U\Big( v \prod_{t=0}^{T-1} r_t + \sum_{t=0}^{T-1} (\theta_t(\omega))' R_t(\omega) \prod_{i=t+1}^{T-1} r_i \Big) \Big],$$

where $\lambda = \{\lambda_t\}_{t=0,1,\ldots,T-1}$ is a nonnegative adapted process. As problem (P) is convex, there is no duality gap between problems (P) and (LD) by the strong duality theorem. Furthermore, a process pair $(\theta^*, \lambda^*)$ satisfying the first order conditions,

$$E\Big[ \mathbf{1}_{A_t^i}\, U'\Big( v \prod_{t=0}^{T-1} r_t + \sum_{t=0}^{T-1} (\theta_t^*)' R_t \prod_{i=t+1}^{T-1} r_i \Big)\, R_t \prod_{i=t+1}^{T-1} r_i \Big] - \big( (\lambda_t^*(A_t^i))' G_t'(\theta_t^*(A_t^i)) \big)' = 0, \tag{3}$$

$$(\lambda_t^*(A_t^i))' \big[ b_t - G_t(\theta_t^*(A_t^i)) \big] = 0, \tag{4}$$

for $i = 1, \ldots, (n+1)^t$ and $t = 0, 1, \ldots, T-1$, where

$$G_t'(\theta) = \begin{pmatrix} \nabla G_t^1(\theta)' \\ \vdots \\ \nabla G_t^{d_t}(\theta)' \end{pmatrix},$$

solves both the primal and the Lagrangian dual problems.
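To see (3) and (4) in action, the following sketch (ours) checks them on the binding toy example above, writing the no-short-selling constraint as $G(\theta) = -\theta \leq 0 = b$ so that $G'(\theta) = -1$; at $\theta^* = 0$ the multiplier comes out as $\lambda^* = 0.1$ and complementary slackness (4) holds trivially.

```python
# First-order conditions (3)-(4) for the single-period toy problem with
# log utility, R = [0.2, -0.4], and G(theta) = -theta <= 0 (b = 0).
import numpy as np

p = np.array([0.5, 0.5])
R = np.array([0.2, -0.4])
v, r = 1.0, 1.0

theta_star = 0.0                 # constrained optimizer: the constraint binds
VT = v * r + theta_star * R      # terminal wealth, equal to 1 in both states
grad = p @ (R / VT)              # E[U'(V_T) R] with U = ln, i.e. U'(x) = 1/x
G_prime = -1.0
lam_star = grad / G_prime        # (3): grad - lam* G' = 0  ->  lam* = 0.1 >= 0
slack = lam_star * (0.0 - (-theta_star))   # (4): lam* (b - G(theta*)) = 0
print(f"lambda* = {lam_star:.3f}, slackness = {slack:.3f}")
```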
Theorem 4.1. Assume that the concave utility function further satisfies

$$cU'(c + \theta) < \infty \tag{5}$$

for all $c \in \mathbb{R}$ and $\theta \in (-\infty, +\infty)$. For the process pair $(\theta^*, \lambda^*)$ specified in (3) and (4), respectively, let

$$\nu_t^*(A_t^i) = -\frac{\big( (\lambda_t^*(A_t^i))' G_t'(\theta_t^*(A_t^i)) \big)'}{\phi_t(A_t^i)} \tag{6}$$

for $i = 1, \ldots, (n+1)^t$ and $t = 0, 1, \ldots, T-1$, where

$$\phi_t(A_t^i) := E\Big[ \mathbf{1}_{A_t^i}\, U'\Big( v \prod_{t=0}^{T-1} r_t + \sum_{t=0}^{T-1} (\theta_t^*)' R_t \prod_{i=t+1}^{T-1} r_i \Big) \prod_{i=t+1}^{T-1} r_i \Big].$$

Then $\nu_t^* \in \tilde{K}_t$, $\theta_t^* \in K_t$, $\{\theta_t^*\}$ solves $(P_{\nu^*})$, and

$$\delta_t(\nu_t^*) + (\theta_t^*)' \nu_t^* = 0 \tag{7}$$

for $t = 0, 1, \ldots, T-1$.
Proof. The conclusion $\theta_t^* \in K_t$ is due to the strong duality between the primal problem (P) and the Lagrangian dual (LD). The following is clear from (5),

$$(\theta_t^*(A_t^i))' \big( (\lambda_t^*(A_t^i))' G_t'(\theta_t^*(A_t^i)) \big)' = E\Big[ \mathbf{1}_{A_t^i}\, U'\Big( v \prod_{t=0}^{T-1} r_t + \sum_{t=0}^{T-1} (\theta_t^*)' R_t \prod_{i=t+1}^{T-1} r_i \Big)\, (\theta_t^*)' R_t \prod_{i=t+1}^{T-1} r_i \Big] < \infty. \tag{8}$$

As

$$G_t^i(\theta_t) = G_t^i(\theta_t^*) + (\nabla G_t^i(\theta_t^*))'(\theta_t - \theta_t^*) + 0.5\, (\theta_t - \theta_t^*)'\, HG_t^i(\xi)\, (\theta_t - \theta_t^*) \leq b_t^i$$

for some $\xi$ between $\theta_t$ and $\theta_t^*$, $i = 1, \ldots, d_t$, where $HG_t^i$ is the Hessian matrix, we have

$$(\nabla G_t^i(\theta_t^*))' \theta_t \leq b_t^i - G_t^i(\theta_t^*) + (\nabla G_t^i(\theta_t^*))' \theta_t^* - 0.5\, (\theta_t - \theta_t^*)'\, HG_t^i(\xi)\, (\theta_t - \theta_t^*).$$

Since $(\lambda_t^*)'(b_t - G_t(\theta_t^*)) = 0$, we further have

$$(\lambda_t^*)' G_t'(\theta_t^*)\, \theta_t \leq (\lambda_t^*)' G_t'(\theta_t^*)\, \theta_t^* - 0.5\, H_t \leq (\lambda_t^*)' G_t'(\theta_t^*)\, \theta_t^*, \tag{9}$$

where $H_t = \sum_{i=1}^{d_t} H_t^i$ with $H_t^i = \lambda_t^{*i}\, (\theta_t - \theta_t^*)'\, HG_t^i(\xi)\, (\theta_t - \theta_t^*) \geq 0$.

Therefore,

$$\delta_t(\nu_t^*(A_t^i)) = \sup_{\theta_t \in K_t} \big( -\theta_t' \nu_t^*(A_t^i) \big) = \sup_{\theta_t \in K_t} \frac{\theta_t' \big( (\lambda_t^*(A_t^i))' G_t'(\theta_t^*(A_t^i)) \big)'}{\phi_t(A_t^i)} = \frac{(\theta_t^*(A_t^i))' \big( (\lambda_t^*(A_t^i))' G_t'(\theta_t^*(A_t^i)) \big)'}{\phi_t(A_t^i)} < \infty,$$

where the supremum is attained at $\theta_t^*$ by (9) and finiteness follows from (8), which implies $\nu_t^* \in \tilde{K}_t$.

The following is then clear,

$$\delta_t(\nu_t^*) + (\theta_t^*)' \nu_t^* = \frac{(\theta_t^*)' (G_t'(\theta_t^*))' \lambda_t^*}{\phi_t(A_t^i)} - \frac{(\theta_t^*)' (G_t'(\theta_t^*))' \lambda_t^*}{\phi_t(A_t^i)} = 0,$$

which is (7). Furthermore, we can check that $\theta^*$ satisfies the following optimality condition of problem $(P_{\nu^*})$,

$$\psi_t(A_t^i) := E\Big[ \mathbf{1}_{A_t^i}\, (R_t + \nu_t^*) \prod_{i=t+1}^{T-1} r_i\; U'\Big( v \prod_{t=0}^{T-1} r_t + \sum_{t=0}^{T-1} (\theta_t^*)' (R_t + \nu_t^*) \prod_{i=t+1}^{T-1} r_i \Big) \Big] = 0.$$

Actually, we can derive the following equation by (7),

$$V_T^{\nu^*} = v \prod_{t=0}^{T-1} r_t + \sum_{t=0}^{T-1} \big[ \delta_t(\nu_t^*) + (\theta_t^*)' (R_t + \nu_t^*) \big] \prod_{i=t+1}^{T-1} r_i = v \prod_{t=0}^{T-1} r_t + \sum_{t=0}^{T-1} (\theta_t^*)' R_t \prod_{i=t+1}^{T-1} r_i = V_T.$$

Therefore, the following equation can be derived from (3) and (6),

$$\begin{aligned} \psi_t(A_t^i) &= E\Big[ \mathbf{1}_{A_t^i}\, U'(V_T^{\nu^*})\, (R_t + \nu_t^*) \prod_{i=t+1}^{T-1} r_i \Big] = E\Big[ \mathbf{1}_{A_t^i}\, U'(V_T)\, (R_t + \nu_t^*) \prod_{i=t+1}^{T-1} r_i \Big] \\ &= E\Big[ \mathbf{1}_{A_t^i}\, U'(V_T)\, R_t \prod_{i=t+1}^{T-1} r_i \Big] + \phi_t(A_t^i)\, \nu_t^*(A_t^i) \\ &= \big( (\lambda_t^*(A_t^i))' G_t'(\theta_t^*(A_t^i)) \big)' - \big( (\lambda_t^*(A_t^i))' G_t'(\theta_t^*(A_t^i)) \big)' = 0. \end{aligned}$$

Hence, $\theta^*$ solves $(P_{\nu^*})$.
Remark 4.1. If the optimal $(\nu^*, \theta^*)$ for the Pliska's dual problem can be derived, and the matrices $G_t'(\theta_t^*)$ are nonsingular, then the unique optimal Lagrangian multiplier $\lambda^*$ for the Lagrangian dual problem can be found as

$$\lambda_t^*(A_t^i) = -\phi_t(A_t^i) \big( (G_t'(\theta_t^*(A_t^i)))' \big)^{-1} \nu_t^*(A_t^i). \tag{10}$$

When some matrices $G_t'(\theta_t^*)$ are singular or even not square, the optimal Lagrangian multiplier cannot be uniquely determined by the relationship (6).
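On the same toy data as before, formulas (6) and (10) can be checked in a few lines; the sketch below (ours) recovers the $\nu^* = 0.1$ found earlier by grid search from $(\theta^*, \lambda^*)$, and then inverts (10) back to $\lambda^*$.

```python
# Round trip between the Lagrangian multiplier and the dual parameter:
# (6): nu* = -(G'(theta*))' lam* / phi, (10): lam* = -phi ((G')')^{-1} nu*.
# Single-period toy data: R = [0.2, -0.4], log utility, theta* = 0, lam* = 0.1.
import numpy as np

p = np.array([0.5, 0.5])
R = np.array([0.2, -0.4])
theta_star, lam_star = 0.0, 0.1
VT = 1.0 + theta_star * R
phi = p @ (1.0 / VT)             # phi = E[U'(V_T)] = 1 (empty bond product = 1)
G_prime = -1.0                   # G(theta) = -theta

nu_star = -G_prime * lam_star / phi      # (6)  -> 0.1, matching the grid search
lam_back = -phi * nu_star / G_prime      # (10) -> 0.1
print(nu_star, lam_back)
```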
V. ILLUSTRATIVE EXAMPLES
Example 1. We study now a single-period investment example with one risky asset and a no-short-selling constraint to illustrate the relationship among the primal problem, the Lagrangian dual problem and the Pliska's dual problem. The primal problem (P) is given as follows,

$$(P)\qquad \begin{array}{ll} \max & E[U(V_1)] \\ \text{s.t.} & V_1 = v r_0 + \theta R; \\ & \theta \geq 0. \end{array}$$

While the corresponding Lagrangian dual is

$$\min_{\lambda \geq 0} f(\lambda) := \max_{\theta}\ E[U(v r_0 + \theta R)] + \lambda \theta,$$

the associated Pliska's dual is

$$\min_{\nu \geq 0} g(\nu) := \max_{\theta}\ E\Big[ U\Big( v \big( r_0 + \frac{\delta(\nu)}{v} \big) + \theta (R + \nu) \Big) \Big].$$

We can depict the objective functions of the two dual problems in the same figure. Two situations may occur according to different returns of the risky security, $R$. Figure 1 represents the situation when the optimal Lagrangian multiplier $\lambda^*$ is bigger than zero. In such a situation, $\lambda^*$ and the corresponding dual parameter $\nu^*$ may not be equal. However, both optimal objective values are equal to the optimal objective value of the primal problem.
Figure 1: Situation with optimal parameters bigger than 0
Figure 2 illustrates the situation when the optimal Lagrangian multiplier $\lambda^*$ equals zero. In such a situation, both $\lambda^*$ and the corresponding parameter $\nu^*$ are equal to zero, and the optimal objective values of the Lagrangian dual and the Pliska's dual are still equal to the optimal objective value of the primal problem.
Figure 2: Situation with optimal parameters equal to 0
Example 2. Consider Example 5.11 in [1], which is a two-period problem with short selling prohibited, the bond return rate given as $r_0 = r_1 = 1$, and the return process for the single risky asset and the probability measure specified as follows:

ω      μ0(ω)    μ1(ω)    P(ω)
ω1     8/5      9/8      1/4
ω2     8/5      6/8      1/4
ω3     4/5      6/4      1/4
ω4     4/5      3/4      1/4
An investor with a log utility function, $U(V) = \ln(V)$, enters the financial market with initial wealth $v = 1$. The optimal dual parameter, trading strategies and wealth process have been derived in [1]. More specifically, the corresponding optimal process $\nu^*$ is

$$\nu_0^* = 0, \quad \nu_1^*(\{\omega_1, \omega_2\}) = \frac{1}{16}, \quad \nu_1^*(\{\omega_3, \omega_4\}) = 0,$$

and the optimal trading strategies are

$$\theta_0^* = \frac{5}{3}, \quad \theta_1^*(\{\omega_1, \omega_2\}) = 0, \quad \theta_1^*(\{\omega_3, \omega_4\}) = \frac{2}{3},$$

yielding the corresponding optimal wealth process as

$$V_1(\{\omega_1, \omega_2\}) = 2, \quad V_1(\{\omega_3, \omega_4\}) = \frac{2}{3},$$

$$V_2(\omega_1) = 2, \quad V_2(\omega_2) = 2, \quad V_2(\omega_3) = 1, \quad V_2(\omega_4) = \frac{1}{2}.$$

Solving the Lagrangian dual problem,

$$\min_{\lambda \geq 0} \max_{\theta}\ E[\ln(V_2)] + \lambda_0 \theta_0 + \lambda_1(\{\omega_1, \omega_2\})\, \theta_1(\{\omega_1, \omega_2\}) + \lambda_1(\{\omega_3, \omega_4\})\, \theta_1(\{\omega_3, \omega_4\}),$$

gives rise to the optimal Lagrangian parameter process $\lambda^*$,

$$\lambda_0^* = 0, \quad \lambda_1^*(\{\omega_1, \omega_2\}) = \frac{1}{64}, \quad \lambda_1^*(\{\omega_3, \omega_4\}) = 0.$$

It can be verified that the optimal trading strategies derived from the Lagrangian dual and the Pliska's dual are exactly the same, and, furthermore, (6) holds.
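The reported numbers of Example 2 can be reproduced exactly in rational arithmetic. The sketch below (ours) rebuilds the wealth process from $\theta^*$ and recovers $\lambda_1^*(\{\omega_1,\omega_2\}) = 1/64$ from $\nu_1^*(\{\omega_1,\omega_2\}) = 1/16$ via (10), using $\phi_1 = E[\mathbf{1}_A U'(V_2)]$ and $G' = -1$ for the no-short-selling constraint.

```python
# Exact check of Example 2 with fractions: wealth process under theta*,
# then lambda_1*({w1,w2}) = -phi_1 * nu_1* / G' = 1/64.
from fractions import Fraction as F

P = [F(1, 4)] * 4
mu0 = [F(8, 5), F(8, 5), F(4, 5), F(4, 5)]
mu1 = [F(9, 8), F(6, 8), F(6, 4), F(3, 4)]
r = F(1)

theta0 = F(5, 3)
theta1 = [F(0), F(0), F(2, 3), F(2, 3)]          # 0 on {w1,w2}, 2/3 on {w3,w4}

V1 = [F(1) * r + theta0 * (m - r) for m in mu0]                 # 2, 2, 2/3, 2/3
V2 = [V1[w] * r + theta1[w] * (mu1[w] - r) for w in range(4)]   # 2, 2, 1, 1/2
print("V1 =", [str(x) for x in V1], " V2 =", [str(x) for x in V2])

phi1 = P[0] / V2[0] + P[1] / V2[1]               # E[1_{w1,w2} / V_2] = 1/4
lam1 = -phi1 * F(1, 16) / F(-1)                  # (10) with G' = -1
print("lambda_1*({w1,w2}) =", lam1)              # 1/64
```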
VI. CONCLUSION
By identifying the relationship between the Lagrangian dual and the Pliska's dual for the constrained portfolio selection problem, we have derived in this paper a guaranteed strong duality result for a class of discrete-time constrained convex portfolio selection problems. More specifically, we ensure the existence of an optimal $\nu$ in the strong duality conditions of [1], guaranteeing the success of the risk neutral computational approach.
The author declares that there is no conflict of interest.
REFERENCES
[1] S. R. Pliska, Introduction to Mathematical Finance: Discrete Time Models, Blackwell, Oxford, UK, 1997.
[2] J. Cvitanic and I. Karatzas, Convex duality in constrained portfolio optimization, The Annals of Applied Probability, 2(1992), 767–818.
[3] G. L. Xu and S. E. Shreve, A duality method for optimal consumption and investment under short-selling prohibition. I. General market coefficients, The Annals of Applied Probability, 2(1992), 87–112.
[4] G. L. Xu and S. E. Shreve, A duality method for optimal consumption and investment under short-selling prohibition. II. Constant market coefficients, The Annals of Applied Probability, 2(1992), 314–328.
[5] J. C. Cox and C. F. Huang, Optimal consumption and portfolio policies when asset prices follow a diffusion process, Journal of Economic Theory, 49(1989), 33–83.
[6] R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ, 1970.
[7] J. Harrison and D. Kreps, Martingales and multiperiod securities markets, Journal of Economic Theory, 20(1979), 381–408.
[8] J. Harrison and S. Pliska, Martingales and stochastic integrals in the theory of continuous trading, Stochastic Processes and their Applications, 11(1981), 215–260.
[9] I. Klein and L. C. G. Rogers, Duality in optimal investment and consumption problems with market frictions, Mathematical Finance, 17(2007), 225–247.