Using portfolio diversification and risk modeling techniques, determine whether an Insurance portfolio is less volatile than a Tech portfolio.
Covers:
Risk Modeling
Portfolio Diversification
Time Series Forecasting
ARIMA + GARCH + Copula
1. Which is Safe to Invest: Insurance or Technology?
Risk Modeling
Portfolio Diversification
Time Series Forecasting
Nikhil Shrivastava
2. Executive Summary
Objective
Using portfolio diversification and risk modeling techniques, determine whether the Insurance portfolio is less volatile than the Tech portfolio.
Approach
Two different techniques were applied. First, assuming returns follow a Normal distribution, an ARIMA model was used. After plotting the residuals to observe heteroscedasticity and conditional variance, an ARMA + GARCH model was used with a Copula.
Conclusion
There is a 5% chance that the Insurance portfolio would lose 1.35%, with an expected shortfall of 1.83%. On the other hand, there is a 5% chance that the Tech portfolio would lose 1.83%, with an expected shortfall of 2.41%. Hence, on any next day (period) the Insurance portfolio is expected to lose less than the Tech portfolio.
3. Hypothesis
• The Tech industry may experience larger losses than Insurance; that is, the Tech industry is more volatile than the Insurance industry.
• For the purpose of this project, 3 assets in each portfolio are chosen as representatives of each industry.
• It is assumed that:
• The Insurance industry is comprised only of three Fortune 500 insurance carriers: Chubb, Travelers and Prudential.
• The Tech industry is comprised only of Fortune 500 tech giants: Amazon, Google and Facebook.
4. Data & Plots
• Yahoo Finance historical data was obtained for all the companies in both portfolios.
• For the purpose of this analysis, the "Adj Close" price is used. Two portfolios of assets were created:
• InsuRet: the Insurance portfolio, containing the cleaned complete cases of the return series of Chubb, Travelers and Prudential, with a time series attribute.
• TechRet: the Tech portfolio, containing the cleaned complete cases of the return series of Amazon, Google and Facebook, with a time series attribute.
• Insuloss = -1 * InsuRet and TechLoss = -1 * TechRet
• The histograms show that the losses are not truly normally distributed; they look leptokurtic.
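The "leptokurtic" observation can be checked numerically: sample excess kurtosis well above 0 (the Normal value) signals the fat tails visible in the histograms. A minimal Python sketch (the deck's actual analysis is in R); the Student-t(5) draws are a made-up stand-in for a real return column such as one of InsuRet's:

```python
import math
import random

# Rough numeric check of the "leptokurtic" observation: excess kurtosis
# well above 0 (the Normal value) means fat tails. The t(5) draws below
# are a made-up stand-in for an actual return column.
random.seed(7)

def draw_t5():
    # t(5) variate: standard normal over sqrt(chi-square(5)/5)
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(5))
    return z / math.sqrt(chi2 / 5)

returns = [draw_t5() for _ in range(20_000)]
m = sum(returns) / len(returns)
var = sum((r - m) ** 2 for r in returns) / len(returns)
excess_kurtosis = sum((r - m) ** 4 for r in returns) / len(returns) / var**2 - 3.0
# Population excess kurtosis of t(5) is 6/(5-4) = 6, far above the Normal's 0.
```

A value near 0 would support the normality assumption of the first method; a large positive value motivates the heavy-tailed ARMA + GARCH + Copula approach.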
5. Approach
• Two methods were used to predict the Value at Risk of the portfolios:
1. Assuming returns/losses follow a Normal distribution:
E[R_p] = Σ_i w_i · E[R_i]
where E[R_p] is the expected return on the portfolio, w_i is the weight of asset i in the portfolio, and E[R_i] is the expected return of asset i.
2. Plots of all returns/losses showed signs of high kurtosis, hence for the non-normal distribution ARMA + GARCH + Copula was used.
6. Value at Risk, VaR – Portfolio
VaR = V_0 · α · σ_p
σ_p² = w_A²·σ_A² + w_B²·σ_B² + w_C²·σ_C² + 2·w_A·σ_A·ρ_AB·w_B·σ_B + 2·w_A·σ_A·ρ_AC·w_C·σ_C + 2·w_B·σ_B·ρ_BC·w_C·σ_C
• Results show that at the 95% confidence level, there is a 5% chance the Tech portfolio will lose 2.52%, whereas the Insurance portfolio will lose 1.38%.
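The two formulas above can be exercised numerically. A Python sketch (the deck's analysis is in R) with made-up weights, daily volatilities and correlations; the deck's actual inputs come from the return series:

```python
import math

# Three-asset portfolio variance and Normal VaR, term by term as on the
# slide. All inputs below are hypothetical illustration values.
wA, wB, wC = 0.4, 0.3, 0.3          # portfolio weights
sA, sB, sC = 1.2, 1.5, 1.1          # daily volatilities, in %
rAB, rAC, rBC = 0.5, 0.3, 0.4       # pairwise correlations

var_p = (wA**2 * sA**2 + wB**2 * sB**2 + wC**2 * sC**2
         + 2 * wA * sA * rAB * wB * sB
         + 2 * wA * sA * rAC * wC * sC
         + 2 * wB * sB * rBC * wC * sC)
sigma_p = math.sqrt(var_p)

alpha = 1.645                       # one-sided 95% quantile of the standard Normal
V0 = 1.0                            # portfolio value (normalised to 1)
VaR95 = V0 * alpha * sigma_p        # 95% Value at Risk, in %
```

Note that diversification shows up in the cross terms: with correlations below 1, σ_p is smaller than the weighted sum of the individual volatilities.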
8. Residuals from ARIMA: Conditional Heteroscedasticity
• Showing only 2 residual plots: FB from the Tech portfolio and Chubb from the Insurance portfolio. It is evident from these that the residuals have conditional heteroscedasticity.
9. ARMA + GARCH + Copula
To address conditional heteroscedasticity and volatility clustering, ARMA + GARCH models were built via the following steps:
1. Specify and estimate the GARCH models for each loss factor:
gfitTech <- lapply(TechLoss, garchFit, formula = ~arma(0,1) + garch(1,1), cond.dist = "std", trace = FALSE)
gfitInsu <- lapply(Insuloss, garchFit, formula = ~arma(0,1) + garch(1,1), cond.dist = "std", trace = FALSE)
Coefficient(s):
        mu        ma1      omega     alpha1      beta1      shape
 -0.176586   0.014660   0.094656   0.041029   0.915025   3.299001
Std. Errors:
based on Hessian
Error Analysis:
        Estimate  Std. Error  t value  Pr(>|t|)
mu      -0.17659     0.06840   -2.582   0.00983 **
ma1      0.01466     0.06312    0.232   0.81633
omega    0.09466     0.08825    1.073   0.28347
alpha1   0.04103     0.02797    1.467   0.14245
beta1    0.91502     0.05540   16.516   < 2e-16 ***
shape    3.29900     0.69503    4.747  2.07e-06 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
• mu – intercept of the ARMA(0,1) return/loss equation
• ma1 – first-lag moving-average term
• omega – intercept of the conditional variance equation
• alpha1 – coefficient on the lagged squared error
• beta1 – coefficient on the lagged conditional variance
• shape – degrees of freedom of the Student t distribution
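The fitted coefficients define the GARCH(1,1) conditional-variance recursion h_t = omega + alpha1·e²_{t-1} + beta1·h_{t-1}. A Python sketch of that recursion using the slide's coefficient values; the residual series here is simulated, not the actual loss data (the deck estimates all of this with garchFit in R):

```python
import random

# GARCH(1,1) conditional-variance recursion with the fitted coefficients
# from this slide. eps is a simulated stand-in residual series.
omega, alpha1, beta1 = 0.094656, 0.041029, 0.915025

random.seed(0)
eps = [random.gauss(0, 1) for _ in range(500)]    # stand-in residuals

h = [omega / (1 - alpha1 - beta1)]                # start at the unconditional variance
for t in range(1, len(eps)):
    h.append(omega + alpha1 * eps[t - 1] ** 2 + beta1 * h[t - 1])
# Because alpha1 + beta1 ≈ 0.956 < 1, the process is stationary; the large
# beta1 makes variance shocks decay slowly, i.e. volatility clustering.
```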
10. ARMA + GARCH + Copula
2. Estimate the degrees-of-freedom parameter of the GARCH model for each asset:
gshapeTech <- unlist(lapply(gfitTech, function(x) x@fit$coef[6]))
gshapeInsu <- unlist(lapply(gfitInsu, function(x) x@fit$coef[6]))
We take the coefficient that determines the shape, which is the 6th in both cases.
11. ARMA + GARCH + Copula
3. Determine the standardized residuals:
gresidTech <- as.matrix(data.frame(lapply(gfitTech, function(x) x@residuals / sqrt(x@h.t))))
gresidInsu <- as.matrix(data.frame(lapply(gfitInsu, function(x) x@residuals / sqrt(x@h.t))))
4. Calculate the pseudo-uniform variables from the standardized residuals:
U_Tech <- sapply(1:3, function(y) pt(gresidTech[, y], df = gshapeTech[y]))
U_Insu <- sapply(1:3, function(y) pt(gresidInsu[, y], df = gshapeInsu[y]))
5. Estimate the copula model using the Kendall method:
cop_Tech <- fit.tcopula(Udata = U_Tech, method = "Kendall")
cop_Insu <- fit.tcopula(Udata = U_Insu, method = "Kendall")
The Kendall method describes the joint distribution of the three pseudo-uniform variates; the fit yields the Kendall correlation matrix and nu.
12. ARMA + GARCH + Copula
6. Use the dependence structure determined by the estimated copula to generate N data sets of random variates for the pseudo-uniformly distributed variables.
The histograms of rcop_Tech and rcop_Insu show values between 0 and 1, uniformly distributed, as expected (examples shown).
7. Compute the quantiles for these Monte Carlo draws:
qcop_Tech <- sapply(1:3, function(x) qstd(rcop_Tech[, x], nu = gshapeTech[x]))
qcop_Insu <- sapply(1:3, function(x) qstd(rcop_Insu[, x], nu = gshapeInsu[x]))
13. ARMA + GARCH + Copula
8. Create a matrix of 1-period-ahead predictions of standard deviations. The matrix has 100,000 rows and 3 columns and is labeled "ht.mat":
Tech_ht.mat <- matrix(gprogTech, nrow = 100000, ncol = ncol(TechLoss), byrow = TRUE)
Insu_ht.mat <- matrix(gprogInsu, nrow = 100000, ncol = ncol(Insuloss), byrow = TRUE)
9. Use these quantiles in conjunction with the weight vector to calculate the N portfolio loss scenarios. The weight vector was obtained by the global minimum variance portfolio method:
Tech_pfall <- (qcop_Tech * Tech_ht.mat) %*% wTech
Insu_pfall <- (qcop_Insu * Insu_ht.mat) %*% wInsu
10. Finally, use this series to calculate the Value at Risk and Expected Shortfall of the "global minimum variance portfolio" at 95% confidence:
Tech_pfall.es95 <- median(tail(sort(Tech_pfall), 5000))
Tech_pfall.var95 <- min(tail(sort(Tech_pfall), 5000))
Insu_pfall.es95 <- median(tail(sort(Insu_pfall), 5000))
Insu_pfall.var95 <- min(tail(sort(Insu_pfall), 5000))
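Step 10's tail statistics can be illustrated outside R. A Python sketch in which 100,000 Normal draws stand in for the copula-based loss scenarios in Tech_pfall / Insu_pfall; the logic mirrors the deck's tail(sort(x), 5000) construction:

```python
import random
import statistics

# Read VaR and ES off the worst 5% of simulated portfolio losses.
# The draws here are simulated stand-ins, not the actual scenarios.
random.seed(42)
losses = [random.gauss(0, 1) for _ in range(100_000)]

worst = sorted(losses)[-5_000:]      # equivalent of tail(sort(x), 5000)
var95 = min(worst)                   # VaR at 95%: smallest loss in the tail
es95 = statistics.median(worst)      # the deck's ES proxy: median tail loss
```

Note the deck takes the median of the tail as its Expected Shortfall figure; the textbook ES is the tail mean, which for a skewed tail would be somewhat larger.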
# For the Tech portfolio
TechGMV <- PGMV(Techcov)                    # global minimum variance portfolio
www <- as.numeric(Weights(TechGMV)) / 100   # weights, converted from percent
wAMZN <- www[1]
wFB <- www[2]
wGOOG <- www[3]
wTech <- c(wAMZN, wFB, wGOOG)
# For the Insurance portfolio
InsuGMV <- PGMV(Insucov)
www <- as.numeric(Weights(InsuGMV)) / 100
wCB <- www[1]
wPRU <- www[2]
wTRV <- www[3]
wInsu <- c(wCB, wPRU, wTRV)
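For reference, the global minimum variance weights that PGMV() returns have the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A Python sketch on a hypothetical covariance matrix (the deck computes Techcov and Insucov from the actual return series in R):

```python
# Global minimum variance weights: w = inv(Sigma) 1 / (1' inv(Sigma) 1).
# The covariance matrix below is a made-up illustration value.

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))   # partial pivot
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

cov = [[1.44, 0.90, 0.396],
       [0.90, 2.25, 0.660],
       [0.396, 0.660, 1.21]]
y = solve3(cov, [1.0, 1.0, 1.0])      # inv(Sigma) 1
w = [yi / sum(y) for yi in y]         # normalise so the weights sum to 1
```

This is the unconstrained solution; FRAPO's PGMV() solves the same problem as a constrained optimisation, which matters when short positions are disallowed.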
14. Results and Conclusion
Tech_pfall.es95   # 2.41
Tech_pfall.var95  # 1.83
Insu_pfall.es95   # 1.833
Insu_pfall.var95  # 1.353
• Based on the above results, the hypothesis is supported:
• There is a 5% chance that the Tech portfolio would lose 1.83% or more on the next day (period), with an expected shortfall of 2.41%, meaning the average tail loss could be around 2.41%.
• There is a 5% chance that the Insurance portfolio would lose 1.35% or more on the next day, with an average tail loss of 1.83%.
• Compared with the VaR on slide 6, where it was determined that there is a 5% chance the Insurance portfolio will lose 1.38%, the difference is small: the ARMA + GARCH + Copula estimate differs by only 0.03 percentage points.
• Similarly, for the Tech portfolio it was earlier determined that it may lose 2.52%, whereas ARMA + GARCH + Copula determined 1.83%, indicating that the earlier approach over-estimated the risk.
• Hence, the Insurance portfolio is safer than the Tech portfolio, and as these portfolios are representations of the Insurance and Tech industries, it is safer to invest in the Insurance industry than in the Tech industry.
15. Future Exploration
1. Using Different Methods to Obtain Weights
PAveDD(): portfolio optimization with an average draw-down constraint
PCDaR(): portfolio optimization with a conditional draw-down at-risk constraint
PERC(): equal-risk-contribution portfolios
PGMV(): global minimum variance portfolio
PMD(): most diversified portfolio
PMTD(): minimum tail-dependent portfolio
PMaxDD(): portfolio optimization with a maximum draw-down constraint
PMinCDaR(): portfolio optimization for minimum conditional draw-down at risk
2. Understanding True Diversification
• Exploration of asset classes.