The average weekly earnings of Google shares are $1.005 per share, with a standard deviation of $0.045. The distribution of returns is more or less symmetric but has a high peak. The Jarque-Bera normality test rejects the hypothesis of normality for the returns data at the 5% level.
Jarque Bera Test
data: returns
X-squared = 54.1642, df = 2, p-value = 1.731e-12
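This output format matches jarque.bera.test from the tseries package; a minimal sketch using the same object names as in the output (returns for the weekly returns, rets for the log returns):

library(tseries)
# Jarque-Bera test: H0 is skewness = 0 and excess kurtosis = 0 (i.e., normality)
jarque.bera.test(returns)   # weekly returns
jarque.bera.test(rets)      # log returns, tested further below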
The distribution of log(returns) is also not close to a normal distribution: the skewness is close to 0, but the excess kurtosis statistic is about 1.5, well above the value of 0 expected under normality. The Jarque-Bera test is significant (p-value < 0.05), indicating that the hypothesis of a normal distribution for log(returns) can be rejected.
Jarque Bera Test
data: rets
X-squared = 49.4632, df = 2, p-value = 1.816e-11
The following plot is a time series plot of the weekly price of Google shares; it shows an increasing trend.
Time series Plot of Google Prices
The time plot of returns shows that Google stock returns go through periods of high volatility at various times. Most returns lie between +/- 10%. Sample moments show that the distribution of log returns is roughly symmetric, with heavier tails than a normal distribution (excess kurtosis = 1.56).
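A minimal sketch of how these sample moments can be computed in base R (assuming, as in the test output above, that the log returns are stored in rets):

library(zoo)                          # for coredata()
x <- as.numeric(coredata(rets))       # plain numeric vector of log returns
m <- mean(x); s <- sd(x)
skew   <- mean((x - m)^3) / s^3       # sample skewness; 0 under normality
exkurt <- mean((x - m)^4) / s^4 - 3   # excess kurtosis; 0 under normality
c(mean = m, sd = s, skewness = skew, excess.kurtosis = exkurt)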
To check the null hypothesis of non-stationarity:
The Dickey-Fuller test is expressed as H0: φ1 = 1 vs. Ha: φ1 < 1, where the null hypothesis indicates unit-root non-stationarity and the alternative indicates stationarity.
The Dickey-Fuller test shows that the returns series is stationary. The test p-value at lag order 7 is less than 0.05, so the null hypothesis of unit-root non-stationarity can be rejected.
Augmented Dickey-Fuller Test
data: rets
Dickey-Fuller = -7.1264, Lag order = 7, p-value = 0.01
alternative hypothesis: stationary
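The output above is in the format produced by adf.test from the tseries package; a sketch of the call (by default the lag order is trunc((n - 1)^(1/3)), which gives 7 for a sample of this size):

library(tseries)
library(zoo)                          # for coredata()
# ADF test: H0 is a unit root (non-stationarity), Ha is stationarity
adf.test(as.numeric(coredata(rets)))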
To check for serial correlation in the log returns:
The log returns are not serially correlated, as shown by the Box-Pierce test (p-value > 0.05) and the autocorrelation plot.
Box-Pierce test
data: coredata(rets)
X-squared = 0.686, df = 1, p-value = 0.4075
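A sketch of the matching call (df = 1 in the output corresponds to lag = 1; "Box-Pierce" is the default type of Box.test):

library(zoo)   # for coredata()
# Box-Pierce test at lag 1: H0 is no serial correlation in the log returns
Box.test(coredata(rets), lag = 1, type = "Box-Pierce")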
To look for evidence of ARCH effects in the log returns:
The analysis below shows a strong ARCH effect: the squared returns are strongly autocorrelated. The Box-Pierce test on the squared returns is highly significant (p-value < 0.005), and the autocorrelation plot shows large autocorrelations over the first 15 lags.
Box-Pierce test
data: coredata(rets^2)
X-squared = 8.1483, df = 1, p-value = 0.00431
Plot of PACF: there is no significant partial autocorrelation up to lag 10, which suggests that no AR terms are needed in the mean equation.
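A sketch of the diagnostics described above, applied to the squared log returns:

library(zoo)   # for coredata()
r2 <- coredata(rets)^2
# Box-Pierce test on squared returns: significant autocorrelation signals ARCH effects
Box.test(r2, lag = 1, type = "Box-Pierce")
acf(r2, lag.max = 15)               # large autocorrelations over the first 15 lags
pacf(coredata(rets), lag.max = 10)  # no significant spikes => no AR terms needed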
To fit an ARMA(0,0)-GARCH(2,1) model to the log returns, using a normal distribution for the error terms:
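The goodness-of-fit table below matches the output format of the rugarch package, so a plausible sketch of the fitting step is:

library(rugarch)
# ARMA(0,0) mean equation, standard GARCH(2,1) variance equation, normal errors
spec <- ugarchspec(
  variance.model     = list(model = "sGARCH", garchOrder = c(2, 1)),
  mean.model         = list(armaOrder = c(0, 0), include.mean = TRUE),
  distribution.model = "norm"
)
fit <- ugarchfit(spec = spec, data = rets)
show(fit)   # prints coefficients, residual diagnostics, and the goodness-of-fit table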
Fitted Model:
Residual Analysis:
Adjusted Pearson Goodness-of-Fit Test:
------------------------------------
group statistic p-value(g-1)
1 20 28.78 0.06948
2 30 38.57 0.11019
3 40 46.39 0.19380
4 50 51.15 0.38929
The output above is consistent with normally distributed error terms: all p-values are greater than 0.05, so the goodness-of-fit test fails to reject the assumed normal distribution.
To fit an ARMA(0,0)-eGARCH(1,1) model with a Gaussian distribution:
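Only the variance model changes relative to the previous specification; a sketch, again assuming the rugarch package:

library(rugarch)
# Exponential GARCH models log-variance, allowing asymmetric (leverage) effects
espec <- ugarchspec(
  variance.model     = list(model = "eGARCH", garchOrder = c(1, 1)),
  mean.model         = list(armaOrder = c(0, 0), include.mean = TRUE),
  distribution.model = "norm"
)
efit <- ugarchfit(spec = espec, data = rets)
show(efit)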
Fitted Model is:
Residual Analysis:
...
Logistic Regression in Case-Control StudySatish Gupta
This document provides an introduction to using logistic regression in R to analyze case-control studies. It explains how to download and install R, perform basic operations and calculations, handle data, load libraries, and conduct both conditional and unconditional logistic regression. Conditional logistic regression is recommended for matched case-control studies as it provides unbiased results. The document demonstrates how to perform logistic regression on a lung cancer dataset to analyze the association between disease status and genetic and environmental factors.
Learn to compare objects in R using built-in comparison functions. This tutorial is part of the Working With Data module of the R Programming course offered by r-squared.
This document provides examples and explanations for statistical concepts covered on a final exam, including the normal distribution, hypothesis testing, and probability distributions. It includes sample problems calculating probabilities and critical values for hypothesis tests on means and proportions. Excel templates are referenced for finding probabilities based on the standard normal and Poisson distributions. Step-by-step workings are shown for several problems to illustrate statistical calculations and interpretations.
The document discusses analyzing multivariate time series of five energy futures (crude oil, ethanol, gasoline, heating oil, natural gas) using vector autoregressive (VAR) and vector error correction (VEC) models. It finds the futures are cointegrated using Johansen and Engle-Granger tests, indicating they share a common stochastic trend. A VAR(1) model is estimated and found stable. The VEC model captures the error correction behavior as futures return to their long-run equilibrium. Forecasts are generated and limitations of the Engle-Granger approach discussed.
This document discusses using bootstrap methods to create confidence intervals for time series forecasts. It provides examples of time series data and introduces the AR(1) model. The document describes an algorithm for calculating a bootstrap confidence interval for forecasting from an AR(1) model. It then discusses a simulation study comparing empirical coverage rates of bootstrap confidence intervals under different parameters. Finally, it applies the bootstrap method to forecasting Gross National Product growth, comparing the results to a parametric approach.
This document discusses using bootstrap methods to create confidence intervals for time series forecasts. It provides background on time series models and the autoregressive (AR) process. It then presents an algorithm for calculating a bootstrap confidence interval for forecasts from an AR(1) model. A simulation study compares coverage rates for bootstrap confidence intervals under different parameters. Finally, the method is applied to US Gross National Product data to forecast and construct confidence intervals.
I am Hannah Lucy. Currently associated with excelhomeworkhelp.com as excel homework helper. After completing my master's from Kean University, USA, I was in search of an opportunity that expands my area of knowledge hence I decided to help students with their homework. I have written several excel homework till date to help students overcome numerous difficulties they face.
Logistic Regression in Case-Control StudySatish Gupta
This document provides an introduction to using logistic regression in R to analyze case-control studies. It explains how to download and install R, perform basic operations and calculations, handle data, load libraries, and conduct both conditional and unconditional logistic regression. Conditional logistic regression is recommended for matched case-control studies as it provides unbiased results. The document demonstrates how to perform logistic regression on a lung cancer dataset to analyze the association between disease status and genetic and environmental factors.
Learn to compare objects in R using built-in comparison functions. This tutorial is part of the Working With Data module of the R Programming course offered by r-squared.
This document provides examples and explanations for statistical concepts covered on a final exam, including the normal distribution, hypothesis testing, and probability distributions. It includes sample problems calculating probabilities and critical values for hypothesis tests on means and proportions. Excel templates are referenced for finding probabilities based on the standard normal and Poisson distributions. Step-by-step workings are shown for several problems to illustrate statistical calculations and interpretations.
The document discusses analyzing multivariate time series of five energy futures (crude oil, ethanol, gasoline, heating oil, natural gas) using vector autoregressive (VAR) and vector error correction (VEC) models. It finds the futures are cointegrated using Johansen and Engle-Granger tests, indicating they share a common stochastic trend. A VAR(1) model is estimated and found stable. The VEC model captures the error correction behavior as futures return to their long-run equilibrium. Forecasts are generated and limitations of the Engle-Granger approach discussed.
This document discusses using bootstrap methods to create confidence intervals for time series forecasts. It provides examples of time series data and introduces the AR(1) model. The document describes an algorithm for calculating a bootstrap confidence interval for forecasting from an AR(1) model. It then discusses a simulation study comparing empirical coverage rates of bootstrap confidence intervals under different parameters. Finally, it applies the bootstrap method to forecasting Gross National Product growth, comparing the results to a parametric approach.
This document discusses using bootstrap methods to create confidence intervals for time series forecasts. It provides background on time series models and the autoregressive (AR) process. It then presents an algorithm for calculating a bootstrap confidence interval for forecasts from an AR(1) model. A simulation study compares coverage rates for bootstrap confidence intervals under different parameters. Finally, the method is applied to US Gross National Product data to forecast and construct confidence intervals.
I am Hannah Lucy. Currently associated with excelhomeworkhelp.com as excel homework helper. After completing my master's from Kean University, USA, I was in search of an opportunity that expands my area of knowledge hence I decided to help students with their homework. I have written several excel homework till date to help students overcome numerous difficulties they face.
This paper presents a novel SAT-based approach for the computation of extensions in abstract argumentation, with focus on preferred semantics, and an empirical evaluation of its performances. The approach is based on the idea of reducing the problem of computing complete extensions to a SAT problem and then using a depth-first search method to derive preferred extensions. The proposed approach has been tested using two distinct SAT solvers and compared with three state-of-the-art systems for preferred extension computation. It turns out that the proposed approach delivers significantly better performances in the large majority of the considered cases.
This document describes the design and implementation of a control system for an inverted pendulum system using state feedback controllers. Several control methods are explored, including pole placement, state feedback, observer design, and linear quadratic regulation (LQR). Simulations are performed in MATLAB to analyze the time response of the closed-loop system and ensure it meets design requirements of settling time less than 0.4 seconds and pendulum angle maintained below 0.2 radians.
This document discusses basic loops and functions in R programming. It covers control statements like loops and if/else, arithmetic and boolean operators, default argument values, and returning values from functions. It also describes R programming structures, recursion, and provides an example of implementing quicksort recursively and constructing a binary search tree. The key topics are loops, control flow, functions, recursion, and examples of sorting and binary trees.
This document discusses autocorrelation in the context of time series regression analysis. It begins by defining autocorrelation as correlation between observations in a time series. When autocorrelation is present, the assumptions of the classical linear regression model are violated. The document then discusses some potential causes of autocorrelation, including omitted variables, incorrect functional form, and exclusion of lagged variables. It proceeds to describe several tests to detect autocorrelation, including graphical tests, the runs test, Durbin-Watson test, and Breusch-Godfrey test. The document concludes by outlining some remedial measures that can be taken if autocorrelation is present, such as generalized least squares and first differencing transformations.
I am Hannah Lucy. Currently associated with statisticshomeworkhelper.com as statistics homework helper. After completing my master's from Kean University, USA, I was in search of an opportunity that expands my area of knowledge hence I decided to help students with their homework. I have written several statistics homework till date to help students overcome numerous difficulties they face.
The document discusses representations and operations on polynomials. It describes how polynomials can be represented as lists of coefficients. It also explains how to perform basic polynomial operations like evaluation, addition, multiplication, division, and finding the greatest common divisor (GCD) of two polynomials. Horner's method and the Euclidean algorithm are key algorithms discussed for efficiently evaluating and finding the GCD of polynomials.
The document describes statistical analyses performed on data from 150 observations of 7 variables for 30 manufacturing companies over 5 years. Descriptive statistics are presented showing minimum, maximum, average and standard deviation for each variable. Normality tests using P-P Plots indicate the data are normally distributed. Multicollinearity and autocorrelation tests were also conducted on regression models to ensure they met assumptions of no multicollinearity and no autocorrelation.
This document discusses logistic regression for categorical response variables. It provides examples of binary and ordinal categorical response variables like whether someone smokes (yes/no) or the success of a medical treatment (survives/dies). It then demonstrates how to perform binary logistic regression in R to predict a binary outcome like gender from height. Key aspects covered include interpreting the logistic regression coefficients, plotting the logistic curve, and calculating odds ratios to compare two groups.
Time Series Analysis on Egg depositions (in millions) of age-3 Lake Huron Blo...ShuaiGao3
This assignment’s task is to analyze the egg depositions of Lake Huron Bloasters by using the analysis methods and choose the best model among a set of possible models for this data-set and give forecasts of egg depositions for the next 5 years. The data-set is collect from FSAdata package, we will directly use the eggs data provide by this data-set.
David Russo worked in the Wang Lab during summer 2010 where he improved an existing program called Patser that identifies conserved regions in DNA that could serve as transcription factor binding sites. He analyzed potential binding sites for transcription factors in the IL22 gene by running sequences through Patser with and without integrating conservation scores. The results identified several transcription factors where Patser, Patser/PhastCons, and Patser/PhyloP identified binding sites in the same regions.
Okay, here are the steps to convert each score to a z-score:
For history test:
Z = (X - Mean) / Standard Deviation
Z = (78 - 79) / 6
Z = -0.167
For math test:
Z = (X - Mean) / Standard Deviation
Z = (82 - 84) / 5
Z = 0.8
So the z-score for the history test is -0.167 and the z-score for the math test is 0.8.
Predicting US house prices using Multiple Linear Regression in RSotiris Baratsas
In this study, we attempted to formulate a Multiple Linear Regression model, to predict US house prices.
Steps involved:
Perform descriptive analysis and visualisation for each variable to get an initial insight of what the data looks like.
Conduct pairwise comparisons between the variables in the dataset to investigate if there are any associations implied by the dataset.
Construct a model for the expected selling prices according to the remaining features. Check whether this linear model fits well to the data.
Find the best model for predicting the selling prices and select the appropriate features using stepwise methods (used Forward, Backward and Stepwise procedures according to AIC or BIC to choose which variables appear to be more significant for predicting selling prices).
Get the summary of our final model, interpret the coefficients. Comment on the significance of each coefficient and write down the mathematical formulation of the model. Consider whether the intercept should be excluded from our model.
Check the assumptions of your final model. Are the assumptions satisfied? If not, what is the impact of the violation of the assumption not satisfied in terms of inference? What could someone do about it?
Conduct LASSO as a variable selection technique and compare the variables that we end up having using LASSO to the variables that you ended up having using stepwise methods.
This document provides an overview of separation logic, including:
- Applications include program analysis, verified software, and axiomatic semantics.
- Future work may focus on logics beyond pre/post conditions to specify order of actions or observable program states.
- SpaceInvader is an implementation of compositional shape analysis via bi-abduction that uses separation logic to reason about mutable data structures.
- Smallfoot is an earlier tool that used symbolic execution and a decidable fragment of separation logic to perform automatic reasoning with Hoare logic for a toy language.
The document presents calculations to compare the means of two samples (a and b) and test if they are statistically different. It first calculates the sample means, pooled variance, standard error, and 95% confidence interval. The test statistic is the difference in means divided by the standard error. The p-value is calculated as twice the area under the t distribution curve to the left of the test statistic, as this is a two-tailed test. The confidence interval and p-value calculations are also shown assuming a normal distribution with equal variances.
Week 4 Lecture 12 Significance Earlier we discussed co.docxcockekeshia
Week 4 Lecture 12
Significance
Earlier we discussed correlations without going into how we can identify statistically
significant values. Our approach to this uses the t-test. Unfortunately, Excel does not
automatically produce this form of the t-test, but setting it up within an Excel cell is fairly easy.
And, with some slight algebra, we can determine the minimum value that is statistically
significant for any table of correlations all of which have the same number of pairs (for example,
a Correlation table for our data set would use 50 pairs of values, since we have 50 members in
our sample).
The t-test formula for a correlation (r) is t = r * sqrt(n-2)/sqrt(1-r2); the associated degrees
of freedom are n-2 (number of pairs – 2) (Lind, Marchel, & Wathen, 2008). For some this might
look a bit off-putting, but remember that we can translate this into Excel cells and functions and
have Excel do the arithmetic for us.
Excel Example
If we go back to our correlation table for salary, midpoint, Age, Perf Rat, Service, and
Raise, we have:
Using Excel to create the formula and cell numbers for our key values allows us to
quickly create a result. The T.dist.2t gives us a p-value easily.
The formula to use in finding the minimum correlation value that is statistically
significant is r = sqrt(t^2/(t^2 + n-2)). We would find the appropriate t value by using the
t.inv.2T(alpha, df) with alpha = 0.05 and df = n-2 or 48. Plugging these values into the gives us
a t-value of 2.0106 or 2.011(rounded).
Putting 2.011 and 48 (n-2) into our formula gives us a r value of 0.278; therefore, in a
correlation table based on 50 pairs, any correlation greater or equal to 0.278 would be
statistically significant.
Technical Point. If you are interested in how we obtained the formula for determining
the minimum r value, the approach is shown below. If you are not interested in the math, you
can safely skip this paragraph.
t = r* sqrt(n-2)/sqrt(1-r2)
Multiplying gives us t *sqrt (1- r2) = r2* (n-2)
Squaring gives us: t2 * (1- r2) = r2* (n-2)
Multiplying out gives us: t2– t2* r2 = n r2-2* r2
Adding gives us: t2= n* r2-2*r2+ t2 *r2
Factoring gives us t2= r2 *(n -2+ t2)
Dividing gives us t2 / (n -2+ t2) = r2
Taking the square root gives us r = sqrt (t2 / (n -2+ t2)
Effect Size Measures
As we have discussed, there is a difference between statistical and practical
significance. Virtually any statistic can become statistically significant if the sample is large
enough. In practical terms, a correlation of .30 and below is generally considered too weak to be
of any practical significance. Additionally, the effect size measure for Pearson’s correlation is
simply the absolute value of the correlation; the outcome has the same general interpretation as
Cohen’s D for the t-test (0.8 is strong, and 0.2 is quite weak, for example) (Tanner & Youssef-
Morgan, 2013).
Spearman’s Rank Correlation
Another typ.
The document compares linear regression using gradient descent and normal equations on two datasets. For the FRIED dataset, gradient descent without regularization had the best results. Adding higher degree polynomials and variable multiplications increased the model complexity but led to overfitting. For the ABALONE dataset, gradient descent with lambda=0.03 performed best. Normal equations was faster for the smaller ABALONE dataset but slower for the larger FRIED dataset due to its cubic runtime complexity. Increasing the model complexity provided better fits to the training data but risked overfitting.
The changes required in the IT project plan for Telecomm Ltd would.docxmattinsonjanel
The changes required in the IT project plan for Telecomm Ltd would entail specific variation in the platforms used in the initial implementation plan. Initially, the three projects that were planned for implementation included; the installation of business intelligence platform, the implementation of Statistical Analysis System software technology, and the creation of an effectively network infrastructure. In this case, the changes would include an addition of an ERP software to ensure the performance of the workforce within the Telecomms Ltd employees.
ERP is an effectively coordinated information technology system that would ensure the company’s performance is enhanced. To understand how the implementation of a coordinated IT system offers a competitive advantage of a firm, it is essential to acknowledge three core reasons for the failure of information technology related projects as commonly cited by IT managers. In this case, IT managers cite the three reasons as; poor planning or management, change in business objectives and goals during the implementation process of a project, and lack of proper management support completion (Houston, 2011). Also, in the majority of completed projects, technology is usually deployed in a vacuum; hence users resist it. The implementation of coordinated information technology systems, such as ERP would provide an ultimate solution to the three reasons for failure, and thus would give Telecomms Ltd a competitive advantage in the already competitive market. Since the implementation of systems like ERP directly provides solution to common problems that act as drawbacks regarding the competitiveness of firm, it is, therefore, evident that its use place Telecomms Ltd above its rival companies in the market share (Wallace & Kremzar, 2001).
The use ERP, which is a reliable coordinated IT system entails three distinctive implementation strategies that a firm can choose depending on its specific needs. The changes in the projects would be as follows: The three implementation strategies are independently capable of providing a relatively competitive advantage for many companies. These strategies are: big bang, phased rollout, and parallel adoption. In the big bang implementation strategy, happens in a single instance, whereby all the users are moved to a new system on a designated (Wallace & Kremzar, 2001). The phased rollout implementation on the other hand usually involves a changeover in several phases, and it is executed in an extended period. In this case, the users move onto the new system in a series of steps (Houston, 2011). Lastly, the parallel adoption implementation strategy allows both legacy and the new ERP system to run at the same time. It is also essential to note that users in this strategy get to learn the new system while still working on the old system (Wallace & Kremzar, 2001). The three strategies effectively change the information system of Telecomms Ltd tremendously such that it positiv ...
The Catholic University of America Metropolitan School of .docxmattinsonjanel
The Catholic University of America
Metropolitan School of Professional Studies
Course Syllabus
THE CATHOLIC UNIVERSITY OF AMERICA
Metropolitan School of Professional Studies
MBU 514 and MBU 315 Leadership Foundations
Fall 2015
Credits: 3
Classroom: Online
Dates: August 31, 2015 to December 14, 2015
Instructor:
Dr. Jacquie Hamp
Email: [email protected]
Twitter: @drjacquie
Telephone: 202 215 8117 cell
Office Hours: By Appointment
Dr. Jacquie Hamp is an educator, coach and consultant with particular expertise in leadership development, organizational development and human resources development strategy. From 2006 to 2015 she held the position as the Senior Director of Leadership Development for Goodwill Industries International in Rockville, Maryland. Dr. Hamp was responsible for the design and execution of leadership development programs and activities for all levels of the 4 billion dollar social enterprise network of Goodwill Industries across 165 independent local agencies. Jacquie is also a part time Associate Professor at George Washington University teaching at the graduate level and she is an adjunct professor at Catholic University of America, teaching leadership theory in the Masters Program.
Jacquie has a Master of Science degree in Human Resources Development Administration from Barry University. She holds a Doctor of Education degree in Human and Organizational Learning from the Graduate School of Education and Human Development at George Washington University. Jacquie has received a certificate in Executive Coaching from Georgetown University, a certificate in the Practice of Teaching Leadership from Harvard University and holds the national certification of Senior Professional in Human Resources (SPHR).
Jacquie has been invited to speak at conferences in the United States and the United Kingdom on the topic of how women learn through transformative experiences and techniques for effective leadership development in the social enterprise sector. She is a member of the Society of Human Resource Management (SHRM) and the International Leadership Association (ILA). In 2011 Dr. Hamp was awarded the Strategic Alignment Award by the Human Resources Leadership Association of Washington DC for her work in the redesign of the Goodwill Industries International leadership programs in order to meet the strategic goals of the organization.
Course Description: Surveys, compares, and contrasts contemporary theories of leadership, providing students the opportunity to assess their own leadership competencies and how they fit in with models of leadership. Students also discuss current literature, media coverage, and case studies on leadership issues.
Instructional Methods This course is based on the following adult learning concepts:
1. Learning is done by the learners, who are encouraged to achieve the overall course objectives through individual learning styles that meet their personal learning needs. ...
More Related Content
Similar to The average weekly earnings of google shares are $1.005 per share .docx
This paper presents a novel SAT-based approach for the computation of extensions in abstract argumentation, with focus on preferred semantics, and an empirical evaluation of its performances. The approach is based on the idea of reducing the problem of computing complete extensions to a SAT problem and then using a depth-first search method to derive preferred extensions. The proposed approach has been tested using two distinct SAT solvers and compared with three state-of-the-art systems for preferred extension computation. It turns out that the proposed approach delivers significantly better performances in the large majority of the considered cases.
This document describes the design and implementation of a control system for an inverted pendulum system using state feedback controllers. Several control methods are explored, including pole placement, state feedback, observer design, and linear quadratic regulation (LQR). Simulations are performed in MATLAB to analyze the time response of the closed-loop system and ensure it meets design requirements of settling time less than 0.4 seconds and pendulum angle maintained below 0.2 radians.
This document discusses basic loops and functions in R programming. It covers control statements like loops and if/else, arithmetic and boolean operators, default argument values, and returning values from functions. It also describes R programming structures, recursion, and provides an example of implementing quicksort recursively and constructing a binary search tree. The key topics are loops, control flow, functions, recursion, and examples of sorting and binary trees.
This document discusses autocorrelation in the context of time series regression analysis. It begins by defining autocorrelation as correlation between observations in a time series. When autocorrelation is present, the assumptions of the classical linear regression model are violated. The document then discusses some potential causes of autocorrelation, including omitted variables, incorrect functional form, and exclusion of lagged variables. It proceeds to describe several tests to detect autocorrelation, including graphical tests, the runs test, Durbin-Watson test, and Breusch-Godfrey test. The document concludes by outlining some remedial measures that can be taken if autocorrelation is present, such as generalized least squares and first differencing transformations.
I am Hannah Lucy. Currently associated with statisticshomeworkhelper.com as statistics homework helper. After completing my master's from Kean University, USA, I was in search of an opportunity that expands my area of knowledge hence I decided to help students with their homework. I have written several statistics homework till date to help students overcome numerous difficulties they face.
The document discusses representations and operations on polynomials. It describes how polynomials can be represented as lists of coefficients. It also explains how to perform basic polynomial operations like evaluation, addition, multiplication, division, and finding the greatest common divisor (GCD) of two polynomials. Horner's method and the Euclidean algorithm are key algorithms discussed for efficiently evaluating and finding the GCD of polynomials.
The document describes statistical analyses performed on data from 150 observations of 7 variables for 30 manufacturing companies over 5 years. Descriptive statistics are presented showing minimum, maximum, average and standard deviation for each variable. Normality tests using P-P Plots indicate the data are normally distributed. Multicollinearity and autocorrelation tests were also conducted on regression models to ensure they met assumptions of no multicollinearity and no autocorrelation.
This document discusses logistic regression for categorical response variables. It provides examples of binary and ordinal categorical response variables like whether someone smokes (yes/no) or the success of a medical treatment (survives/dies). It then demonstrates how to perform binary logistic regression in R to predict a binary outcome like gender from height. Key aspects covered include interpreting the logistic regression coefficients, plotting the logistic curve, and calculating odds ratios to compare two groups.
Time Series Analysis on Egg depositions (in millions) of age-3 Lake Huron Blo...ShuaiGao3
This assignment’s task is to analyze the egg depositions of Lake Huron Bloasters by using the analysis methods and choose the best model among a set of possible models for this data-set and give forecasts of egg depositions for the next 5 years. The data-set is collect from FSAdata package, we will directly use the eggs data provide by this data-set.
David Russo worked in the Wang Lab during summer 2010 where he improved an existing program called Patser that identifies conserved regions in DNA that could serve as transcription factor binding sites. He analyzed potential binding sites for transcription factors in the IL22 gene by running sequences through Patser with and without integrating conservation scores. The results identified several transcription factors where Patser, Patser/PhastCons, and Patser/PhyloP identified binding sites in the same regions.
Okay, here are the steps to convert each score to a z-score:
For history test:
Z = (X - Mean) / Standard Deviation
Z = (78 - 79) / 6
Z = -0.167
For math test:
Z = (X - Mean) / Standard Deviation
Z = (82 - 84) / 5
Z = 0.8
So the z-score for the history test is -0.167 and the z-score for the math test is 0.8.
Predicting US house prices using Multiple Linear Regression in RSotiris Baratsas
In this study, we attempted to formulate a Multiple Linear Regression model, to predict US house prices.
Steps involved:
Perform descriptive analysis and visualisation for each variable to get an initial insight of what the data looks like.
Conduct pairwise comparisons between the variables in the dataset to investigate if there are any associations implied by the dataset.
Construct a model for the expected selling prices according to the remaining features. Check whether this linear model fits well to the data.
Find the best model for predicting the selling prices and select the appropriate features using stepwise methods (used Forward, Backward and Stepwise procedures according to AIC or BIC to choose which variables appear to be more significant for predicting selling prices).
Get the summary of our final model, interpret the coefficients. Comment on the significance of each coefficient and write down the mathematical formulation of the model. Consider whether the intercept should be excluded from our model.
Check the assumptions of your final model. Are the assumptions satisfied? If not, what is the impact of the violation of the assumption not satisfied in terms of inference? What could someone do about it?
Conduct LASSO as a variable selection technique and compare the variables that we end up having using LASSO to the variables that you ended up having using stepwise methods.
This document provides an overview of separation logic, including:
- Applications include program analysis, verified software, and axiomatic semantics.
- Future work may focus on logics beyond pre/post conditions to specify order of actions or observable program states.
- SpaceInvader is an implementation of compositional shape analysis via bi-abduction that uses separation logic to reason about mutable data structures.
- Smallfoot is an earlier tool that used symbolic execution and a decidable fragment of separation logic to perform automatic reasoning with Hoare logic for a toy language.
The document presents calculations to compare the means of two samples (a and b) and test if they are statistically different. It first calculates the sample means, pooled variance, standard error, and 95% confidence interval. The test statistic is the difference in means divided by the standard error. The p-value is calculated as twice the area under the t distribution curve to the left of the test statistic, as this is a two-tailed test. The confidence interval and p-value calculations are also shown assuming a normal distribution with equal variances.
Week 4 Lecture 12 Significance Earlier we discussed co.docxcockekeshia
Week 4 Lecture 12
Significance
Earlier we discussed correlations without going into how we can identify statistically
significant values. Our approach to this uses the t-test. Unfortunately, Excel does not
automatically produce this form of the t-test, but setting it up within an Excel cell is fairly easy.
And, with some slight algebra, we can determine the minimum value that is statistically
significant for any table of correlations all of which have the same number of pairs (for example,
a Correlation table for our data set would use 50 pairs of values, since we have 50 members in
our sample).
The t-test formula for a correlation (r) is t = r * sqrt(n-2)/sqrt(1-r2); the associated degrees
of freedom are n-2 (number of pairs – 2) (Lind, Marchel, & Wathen, 2008). For some this might
look a bit off-putting, but remember that we can translate this into Excel cells and functions and
have Excel do the arithmetic for us.
Excel Example
If we go back to our correlation table for salary, midpoint, Age, Perf Rat, Service, and
Raise, we have:
Using Excel to create the formula and cell numbers for our key values allows us to
quickly create a result. The T.dist.2t gives us a p-value easily.
The formula to use in finding the minimum correlation value that is statistically
significant is r = sqrt(t^2/(t^2 + n-2)). We would find the appropriate t value by using the
t.inv.2T(alpha, df) with alpha = 0.05 and df = n-2 or 48. Plugging these values into the gives us
a t-value of 2.0106 or 2.011(rounded).
Putting 2.011 and 48 (n-2) into our formula gives us a r value of 0.278; therefore, in a
correlation table based on 50 pairs, any correlation greater or equal to 0.278 would be
statistically significant.
Technical Point. If you are interested in how we obtained the formula for determining
the minimum r value, the approach is shown below. If you are not interested in the math, you
can safely skip this paragraph.
t = r* sqrt(n-2)/sqrt(1-r2)
Multiplying gives us t *sqrt (1- r2) = r2* (n-2)
Squaring gives us: t2 * (1- r2) = r2* (n-2)
Multiplying out gives us: t2– t2* r2 = n r2-2* r2
Adding gives us: t2= n* r2-2*r2+ t2 *r2
Factoring gives us t2= r2 *(n -2+ t2)
Dividing gives us t2 / (n -2+ t2) = r2
Taking the square root gives us r = sqrt (t2 / (n -2+ t2)
Effect Size Measures
As we have discussed, there is a difference between statistical and practical
significance. Virtually any statistic can become statistically significant if the sample is large
enough. In practical terms, a correlation of .30 and below is generally considered too weak to be
of any practical significance. Additionally, the effect size measure for Pearson’s correlation is
simply the absolute value of the correlation; the outcome has the same general interpretation as
Cohen’s D for the t-test (0.8 is strong, and 0.2 is quite weak, for example) (Tanner & Youssef-
Morgan, 2013).
Spearman’s Rank Correlation
Another typ.
The document compares linear regression using gradient descent and normal equations on two datasets. For the FRIED dataset, gradient descent without regularization had the best results. Adding higher degree polynomials and variable multiplications increased the model complexity but led to overfitting. For the ABALONE dataset, gradient descent with lambda=0.03 performed best. Normal equations was faster for the smaller ABALONE dataset but slower for the larger FRIED dataset due to its cubic runtime complexity. Increasing the model complexity provided better fits to the training data but risked overfitting.
Similar to The average weekly earnings of google shares are $1.005 per share .docx (20)
The changes required in the IT project plan for Telecomm Ltd would.docxmattinsonjanel
The changes required in the IT project plan for Telecomm Ltd would entail specific variation in the platforms used in the initial implementation plan. Initially, the three projects that were planned for implementation included; the installation of business intelligence platform, the implementation of Statistical Analysis System software technology, and the creation of an effectively network infrastructure. In this case, the changes would include an addition of an ERP software to ensure the performance of the workforce within the Telecomms Ltd employees.
ERP is an effectively coordinated information technology system that would ensure the company’s performance is enhanced. To understand how the implementation of a coordinated IT system offers a competitive advantage of a firm, it is essential to acknowledge three core reasons for the failure of information technology related projects as commonly cited by IT managers. In this case, IT managers cite the three reasons as; poor planning or management, change in business objectives and goals during the implementation process of a project, and lack of proper management support completion (Houston, 2011). Also, in the majority of completed projects, technology is usually deployed in a vacuum; hence users resist it. The implementation of coordinated information technology systems, such as ERP would provide an ultimate solution to the three reasons for failure, and thus would give Telecomms Ltd a competitive advantage in the already competitive market. Since the implementation of systems like ERP directly provides solution to common problems that act as drawbacks regarding the competitiveness of firm, it is, therefore, evident that its use place Telecomms Ltd above its rival companies in the market share (Wallace & Kremzar, 2001).
The use ERP, which is a reliable coordinated IT system entails three distinctive implementation strategies that a firm can choose depending on its specific needs. The changes in the projects would be as follows: The three implementation strategies are independently capable of providing a relatively competitive advantage for many companies. These strategies are: big bang, phased rollout, and parallel adoption. In the big bang implementation strategy, happens in a single instance, whereby all the users are moved to a new system on a designated (Wallace & Kremzar, 2001). The phased rollout implementation on the other hand usually involves a changeover in several phases, and it is executed in an extended period. In this case, the users move onto the new system in a series of steps (Houston, 2011). Lastly, the parallel adoption implementation strategy allows both legacy and the new ERP system to run at the same time. It is also essential to note that users in this strategy get to learn the new system while still working on the old system (Wallace & Kremzar, 2001). The three strategies effectively change the information system of Telecomms Ltd tremendously such that it positiv ...
The Catholic University of America Metropolitan School of .docxmattinsonjanel
The Catholic University of America
Metropolitan School of Professional Studies
Course Syllabus
THE CATHOLIC UNIVERSITY OF AMERICA
Metropolitan School of Professional Studies
MBU 514 and MBU 315 Leadership Foundations
Fall 2015
Credits: 3
Classroom: Online
Dates: August 31, 2015 to December 14, 2015
Instructor:
Dr. Jacquie Hamp
Email: [email protected]
Twitter: @drjacquie
Telephone: 202 215 8117 cell
Office Hours: By Appointment
Dr. Jacquie Hamp is an educator, coach and consultant with particular expertise in leadership development, organizational development and human resources development strategy. From 2006 to 2015 she held the position as the Senior Director of Leadership Development for Goodwill Industries International in Rockville, Maryland. Dr. Hamp was responsible for the design and execution of leadership development programs and activities for all levels of the 4 billion dollar social enterprise network of Goodwill Industries across 165 independent local agencies. Jacquie is also a part time Associate Professor at George Washington University teaching at the graduate level and she is an adjunct professor at Catholic University of America, teaching leadership theory in the Masters Program.
Jacquie has a Master of Science degree in Human Resources Development Administration from Barry University. She holds a Doctor of Education degree in Human and Organizational Learning from the Graduate School of Education and Human Development at George Washington University. Jacquie has received a certificate in Executive Coaching from Georgetown University, a certificate in the Practice of Teaching Leadership from Harvard University and holds the national certification of Senior Professional in Human Resources (SPHR).
Jacquie has been invited to speak at conferences in the United States and the United Kingdom on the topic of how women learn through transformative experiences and techniques for effective leadership development in the social enterprise sector. She is a member of the Society of Human Resource Management (SHRM) and the International Leadership Association (ILA). In 2011 Dr. Hamp was awarded the Strategic Alignment Award by the Human Resources Leadership Association of Washington DC for her work in the redesign of the Goodwill Industries International leadership programs in order to meet the strategic goals of the organization.
Course Description: Surveys, compares, and contrasts contemporary theories of leadership, providing students the opportunity to assess their own leadership competencies and how they fit in with models of leadership. Students also discuss current literature, media coverage, and case studies on leadership issues.
Instructional Methods This course is based on the following adult learning concepts:
1. Learning is done by the learners, who are encouraged to achieve the overall course objectives through individual learning styles that meet their personal learning needs. ...
The Case of Frank and Judy. During the past few years Frank an.docxmattinsonjanel
The Case of Frank and Judy.
During the past few years Frank and Judy have experienced many conflicts in their marriage. Although they have made attempts to resolve their problems by themselves, they have finally decided to seek the help of a professional marriage counselor. Even though they have been thinking about divorce with increasing frequency, they still have some hope that they can achieve a satisfactory marriage.
Three couples counselors, each holding a different set of values pertaining to marriage and the family, describe their approach to working with Frank and Judy. As you read these responses, think about the degree to which each represents what you might say and do if you were counseling this couple.
· Counselor A. This counselor believes it is not her place to bring her values pertaining to the family into the sessions. She is fully aware of her biases regarding marriage and divorce, but she does not impose them or expose them in all cases. Her primary interest is to help Frank and Judy discover what is best for them as individuals 459460and as a couple. She sees it as unethical to push her clients toward a definite course of action, and she lets them know that her job is to help them be honest with themselves.
·
· What are your reactions to this counselor's approach?
· ▪ What values of yours could interfere with your work with Frank and Judy?
Counselor B. This counselor has been married three times herself. Although she believes in marriage, she is quick to maintain that far too many couples stay in their marriages and suffer unnecessarily. She explores with Judy and Frank the conflicts that they bring to the sessions. The counselor's interventions are leading them in the direction of divorce as the desired course of action, especially after they express this as an option. She suggests a trial separation and states her willingness to counsel them individually, with some joint sessions. When Frank brings up his guilt and reluctance to divorce because of the welfare of the children, the counselor confronts him with the harm that is being done to them by a destructive marriage. She tells him that it is too much of a burden to put on the children to keep the family together.
· ▪ What, if any, ethical issues do you see in this case? Is this counselor exposing or imposing her values?
· ▪ Do you think this person should be a marriage counselor, given her bias?
· ▪ What interventions made by the counselor do you agree with? What are your areas of disagreement?
Counselor C. At the first session this counselor states his belief in the preservation of marriage and the family. He believes that many couples give up too soon in the face of difficulty. He says that most couples have unrealistically high expectations of what constitutes a “happy marriage.” The counselor lets it be known that his experience continues to teach him that divorce rarely solves any problems but instead creates new problems that are often worse. The counsel ...
The Case of MikeChapter 5 • Common Theoretical Counseling Perspe.docxmattinsonjanel
The Case of Mike
Chapter 5 • Common Theoretical Counseling Perspectives 135
Mike is a 20-year-old male who has just recently been released from jail. Mike is technically on probation for car theft, though he has been involved in crime to a much greater extent. Mike has been identified as a cocaine user and has been suspected, though not convicted, for dealing cocaine. Mike has been tested for drugs by his probation department and was found positive for cocaine. The county has mandated that Mike receive drug counseling but the drug counselor has referred Mike to your office because the drug counselor suspects that Mike has issues beyond simple drug addiction. In fact, the drug counselor’s notes suggest that Mike has Narcissistic personality disorder. Mike seems to have little regard for the feelings of others. Coupled with this is his complete sensitivity to the comments of others. In fact, his prior fiancé has broken off her relationship with him due to what she calls his “constant need for admiration and attention. He is completely self-centered.” After talking with Mike, you quickly find that he has no close friends. As he talks about people who have been close to him, he discounts them for one imperfection or another. These imperfections are all considered severe enough to warrant dismissing the person entirely. Mike makes a point of noting how many have betrayed their loyalty to him or have otherwise failed to give him the credit that he deserves. When asked about getting caught in the auto theft, he remarks that “well my dumb partner got me out of a hot situation by driving me out in a stolen get-a-way car.” (Word on the street has it that Mike was involved in a sour drug deal and was unlikely to have made it out alive if not for his partner.) Mike adds, “you know, I plan everything out perfectly, but you just cannot rely on anybody . . . if you want it done right, do it yourself.” Mike recently has been involved with another woman (unknown to his prior fiancé) who has become pregnant. When she told Mike he said “tough, you can go get an abortionor something, it isn’t like we were in love or something.” Then he laughed at her and toldher to go find some other guy who would shack up with her. Incidentally, Mike is a very attractive man and he likes to point that out on occasion. “Yeah, I was going to be a male model in L. A.,but my agent did not know what he was doing . . . could never get things settled out right . . . so I had to fire him.” Mike is very popular with women and has had a constant string of failed relationships due to what he calls “their inability to keep things exciting.” As Mike puts it “hey, I am too smart for this stuff. These people around me, they don’t deserve the good dummies. But me, well I know how to run things and get over on people. And I am not about to let these dummies get in my way. I got it all figured out . . . see?”
Effective Small Business Management: An Entrepreneurial Approach 9th Edition, 2009 IS ...
THE CHRONICLE OF HIGHER EDUCATIONNovember 8, 2002 -- vol. 49, .docxmattinsonjanel
THE CHRONICLE OF HIGHER EDUCATION
November 8, 2002 -- vol. 49, no. 11, p. B7
The Dangerous Myth of Grade Inflation
By Alfie Kohn
Grade inflation got started ... in the late '60s and early '70s.... The grades that faculty members now give ... deserve to be a scandal.
--Professor Harvey Mansfield, Harvard University, 2001
Grades A and B are sometimes given too readily -- Grade A for work of no very high merit, and Grade B for work not far above mediocrity. ... One of the chief obstacles to raising the standards of the degree is the readiness with which insincere students gain passable grades by sham work.
--Report of the Committee on Raising the Standard, Harvard University, 1894
Complaints about grade inflation have been around for a very long time. Every so often a fresh flurry of publicity pushes the issue to the foreground again, the latest example being a series of articles in The Boston Globe last year that disclosed -- in a tone normally reserved for the discovery of entrenched corruption in state government -- that a lot of students at Harvard were receiving A's and being graduated with honors.
The fact that people were offering the same complaints more than a century ago puts the latest bout of harrumphing in perspective, not unlike those quotations about the disgraceful values of the younger generation that turn out to be hundreds of years old. The long history of indignation also pretty well derails any attempts to place the blame for higher grades on a residue of bleeding-heart liberal professors hired in the '60s. (Unless, of course, there was a similar countercultural phenomenon in the 1860s.)
Yet on campuses across America today, academe's usual requirements for supporting data and reasoned analysis have been suspended for some reason where this issue is concerned. It is largely accepted on faith that grade inflation -- an upward shift in students' grade-point averages without a similar rise in achievement -- exists, and that it is a bad thing. Meanwhile, the truly substantive issues surrounding grades and motivation have been obscured or ignored.
The fact is that it is hard to substantiate even the simple claim that grades have been rising. Depending on the time period we're talking about, that claim may well be false. In their book When Hope and Fear Collide (Jossey-Bass, 1998), Arthur Levine and Jeanette Cureton tell us that more undergraduates in 1993 reported receiving A's (and fewer reported receiving grades of C or below) compared with their counterparts in 1969 and 1976 surveys. Unfortunately, self-reports are notoriously unreliable, and the numbers become even more dubious when only a self-selected, and possibly unrepresentative, segment bothers to return the questionnaires. (One out of three failed to do so in 1993; no information is offered about the return rates in the earlier surveys.)
To get a more accurate picture of whether grades have changed over the years, one needs to look at official student tran ...
The chart is a guide rather than an absolute – feel free to modify.docxmattinsonjanel
The chart is a guide rather than an absolute – feel free to modify or adjust it as need to fit the specific ideas that you are developing.
Area: SALES
Specific Change Plans for Functional Areas
Capability Being Addressed
This can be pulled from the strategic proposal recommended in Part 2B
How do the recommended changes (details provided below) help improve the capability?
This is a logic "double check". Be sure you can show how the changes recommended below improve the capability and help address the product and market focus and add to accomplishment of the value proposition
Details of Specific Changes:
Proposed Changes in Resources
Proposed Changes to Management
Preferences
Proposed Changes to Organizational
Processes
Detailed Change Plans
(Lay out here the specifics of all recommended changes for this area. Modify the layout as necessary to account for the changes being recommended)
Proposed Change
Timing
Costs
On going impact on budget
On going impact on revenue
Wiki
Template
Part-‐2:
Gaps,
Issues
and
New
Strategy
BUSI
4940
–
Business
Policy
1
THE ENVIRONMENT/INDUSTRY
1. Drivers of change
Key drivers of change begin with the availability of substitute products. Many
other
companies can easily provide a substitute and the firm will have to find a way to
stand
out among them. Next would be the ability to differentiate yourself among other
firms
that pose a threat in the industry. Last, the political sector. The the federal, state,
and local governments could all shape the way healthcare is everywhere.
2. Key survival factors
Key survival factors would include making the firm stand out above the rest in the
industry and creating a name for itself. Second would be making sure there is a
broad
network of providers available for the customers. Giving the customer options
will
make the customer happy. Providing excellent customer service is key to any
firm in
the industry.
3. Product/Market and Value Proposition possibilities
Maintaining the use of heavy discounts will keep Careington in the competitive market. They also concentrate on constantly innovating technology to make sure that they have the latest devices to offer their customers. To have a high value proposition, Careington will need to show their customers that they can believe in them and trust them to do the right thing. Showing the customers that they can always be on top of the latest technology and new-age products will help build trust with the customers.
STRATEGY OF THE FIRM
1. Goals
Striving to promote the health and well-being of their clients by continuing to provide low-cost health care solutions. A lot of this concentration is on clients that cannot afford health care very easily or that a ...
The Challenge of Choosing Food.docx
The Challenge of Choosing Food:
For this forum, please read: https://www.washingtonpost.com/lifestyle/food/no-food-is-healthy-not-even-kale/2016/01/15/4a5c2d24-ba52-11e5-829c-26ffb874a18d_story.html?postshare=3401453180639248&tid=ss_fb-bottom
The article is from the Washington Post, January 17, 2016, by Michael Ruhlman, entitled: "No Food is Healthy, Not even Kale."
Based on your reading in the textbook share the following information with your classmates:
(1) To what degree do you agree with the article, "No Food is Healthy, Not even Kale"? Do semantics count? Should we focus on foods that are described as nourishing (nutrient-dense) instead of foods described as healthy because the word "healthy" is a "bankrupt" word? Explain and refer to information from the article.
(2) Based on the article and the textbook reading (review pages 9-30), how challenging is it for you to choose nutritious foods that promote health? What factors drive your food choices? Explain to your classmates.
(3) What do you think is the biggest concern we face health-wise in the US today?
(4) What are some obstacles as to why we may not be eating as well as we would like to?
Please complete all questions; if you have any questions, let me know.
Test file, (Do not modify it)
// $> javac -cp .:junit-cs211.jar ProperQueueTests.java #compile
// $> java -cp .:junit-cs211.jar ProperQueueTests #run tests
//
// On windows replace : with ; (colon with semicolon)
// $> javac -cp .;junit-cs211.jar ProperQueueTests.java #compile
// $> java -cp .;junit-cs211.jar ProperQueueTests #run tests
import org.junit.*;
import static org.junit.Assert.*;
import java.util.*;
public class ProperQueueTests {
public static void main(String args[]){
org.junit.runner.JUnitCore.main("ProperQueueTests");
}
/*
building queues:
- build small empty queue. (2)
- build larger empty queue. (11)
- build length-zero queue. (0)
*/
@Test(timeout=1000) public void ProperQueue_makeQueue_1(){
String expected = "";
ProperQueue q = new ProperQueue(2);
String actual = q.toString();
assertEquals(2, q.getCapacity());
assertEquals(expected, actual);
}
@Test(timeout=1000) public void ProperQueue_makeQueue_2(){
String expected = "";
ProperQueue q = new ProperQueue(11);
String actual = q.toString();
assertEquals(11, q.getCapacity());
assertEquals(expected, actual);
}
@Test(timeout=1000) public void Queue_makeQueue_3(){
String expected = "";
ProperQueue q = new ProperQueue(0);
String actual = q.toString();
assertEquals(0, q.getCapacity());
assertEquals(expected, actual);
}
/*
add/offer tests.
- add a single value to a short queue.
- fill up a small queue.
- over-add to a queue and witness it struggle.
- add many but don't finish filling a queue.
- make size-zero queue, adds fail, check it's still empty.
*/
@Test(timeout=1000) public void ProperQueue_add_1(){
String expecte ...
The Civil Rights Movement
Dr. James Patterson
Black Civil Rights Movement
Basic denial of civil rights (review)
Segregation in society
Inferior schools
Job discrimination
Political disenfranchisement
Over ½ lived below poverty level
Unemployment double national ave.
Ghettoes: gangs, drugs, substandard housing, crime
Early Victories
WWII egalitarianism and backlash against German racism
Jackie Robinson integrated professional baseball—1947
Desegregation of the armed forces ordered by President Truman—1948
Marian Anderson performed at the New York Metropolitan Opera House—1955
Increased interest in civil rights a result of Cold War propaganda
Brown v. Board of Education
1954 – Topeka, Kansas
Linda Brown: filed suit to attend a neighborhood school
“Separate educational institutions are inherently unequal.”
Overturned Plessy v. Ferguson
Court says: integrate "with all deliberate speed."
What did this mean?
Linda Brown and Family
Circumvention of Brown v. Board of Education Ruling
White supremacist parents feared racial mixing and attempted to block black enrollment.
Ignored the integration issue
Token integration
Segregation through standardized placement tests
Segregation through private schools
Stalling through legal action
By 1964, 10 years after the Brown case, only 1% of black children attended truly integrated schools.
Little Rock High School
1957 courts order integration in Little Rock
9 black students enrolled.
Governor called out militia to block it.
Mobs replaced militia after recall.
Eisenhower ordered federal troops to protect the students.
Daily harassment
Courageous black students persevered.
Montgomery Bus Boycott
1955--Rosa Parks arrested for not giving up seat to white man
Boycott of bus system led by Martin Luther King, Jr.:
Walking, church busses, car pools, bicycles
Bus lines caught in the middle
Rosa Parks being Booked
Supreme Court ruled bus companies must integrate.
Inspired other protests:
Sit-ins, wade-ins, kneel-ins
Woolworth’s lunch counter
Montgomery Bus Boycott
Martin Luther King, Jr.
Martin Luther King, Jr.
Non-Violent
Influenced by Gandhi
“The blood may flow, but it must be our blood, not that of the white man.”
“Lord, we ain’t what we oughta be. We ain’t what we wanna be. We ain’t what we gonna be. But thank God, we ain’t what we was.”
Freedom Riders
Activists traveled from city to city to ignite the protest.
Bull Conner:
in Montgomery
Dogs
Whips
Water hoses
Cattle prods
Television
Public backlash
Civil Rights March (AL. 1965)
1963 - Washington, D.C. "I have a Dream"—200,000 attended
Civil Rights Legislation
1964 - Civil Rights Act
1964 - 24th Amendment
Abolished Poll Tax
1965 Voting Rights Act
Affirmative action
Int ...
The Churchill Centre.docx
The Churchill Centre
Their Finest Hour
June 18, 1940
House of Commons
I spoke the other day of the colossal military disaster which occurred when the French High Command
failed to withdraw the northern Armies from Belgium at the moment when they knew that the French front
was decisively broken at Sedan and on the Meuse. This delay entailed the loss of fifteen or sixteen French
divisions and threw out of action for the critical period the whole of the British Expeditionary Force. Our
Army and 120,000 French troops were indeed rescued by the British Navy from Dunkirk but only with the
loss of their cannon, vehicles and modern equipment. This loss inevitably took some weeks to repair, and in
the first two of those weeks the battle in France has been lost. When we consider the heroic resistance
made by the French Army against heavy odds in this battle, the enormous losses inflicted upon the enemy
and the evident exhaustion of the enemy, it may well be thought that these 25 divisions of the
best-trained and best-equipped troops might have turned the scale. However, General Weygand had to fight
without them. Only three British divisions or their equivalent were able to stand in the line with their French
comrades. They have suffered severely, but they have fought well. We sent every man we could to France
as fast as we could re-equip and transport their formations.
I am not reciting these facts for the purpose of recrimination. That I judge to be utterly futile and even
harmful. We cannot afford it. I recite them in order to explain why it was we did not have, as we could have
had, between twelve and fourteen British divisions fighting in the line in this great battle instead of only
three. Now I put all this aside. I put it on the shelf, from which the historians, when they have time, will
select their documents to tell their stories. We have to think of the future and not of the past. This also
applies in a small way to our own affairs at home. There are many who would hold an inquest in the House
of Commons on the conduct of the Governments-and of Parliaments, for they are in it, too-during the years
which led up to this catastrophe. They seek to indict those who were responsible for the guidance of our
affairs. This also would be a foolish and pernicious process. There are too many in it. Let each man search
his conscience and search his speeches. I frequently search mine.
Of this I am quite sure, that if we open a quarrel between the past and the present, we shall find that we
have lost the future. Therefore, I cannot accept the drawing of any distinctions between Members of the
present Government. It was formed at a moment of crisis in order to unite a ...
The Categorical Imperative (selections taken from The Foundations of the Metaphysics of Morals).docx
The Categorical Imperative (selections taken from The Foundations of the Metaphysics of
Morals)
Preface
As my concern here is with moral philosophy, I limit the question suggested to this:
Whether it is not of the utmost necessity to construct a pure moral philosophy, perfectly cleared of everything which is only empirical and
which belongs to anthropology? for that such a philosophy must be possible is evident from the
common idea of duty and of the moral laws. Everyone must admit that if a law is to have moral
force, i.e., to be the basis of an obligation, it must carry with it absolute necessity; that, for
example, the precept, "Thou shalt not lie," is not valid for men alone, as if other rational beings
had no need to observe it; and so with all the other moral laws properly so called; that, therefore,
the basis of obligation must not be sought in the nature of man, or in the circumstances in the
world in which he is placed, but a priori simply in the conception of pure reason; and although
any other precept which is founded on principles of mere experience may be in certain respects
universal, yet in as far as it rests even in the least degree on an empirical basis, perhaps only as to
a motive, such a precept, while it may be a practical rule, can never be called a moral law…
What is the “Good Will?”
NOTHING can possibly be conceived in the world, or even out of it, which can be called
good, without qualification, except a good will. Intelligence, wit, judgement, and the other
talents of the mind, however they may be named, or courage, resolution, perseverance, as
qualities of temperament, are undoubtedly good and desirable in many respects; but these gifts of
nature may also become extremely bad and mischievous if the will which is to make use of them,
and which, therefore, constitutes what is called character, is not good. It is the same with the
gifts of fortune. Power, riches, honour, even health, and the general well-being and contentment
with one's condition which is called happiness, inspire pride, and often presumption, if there is
not a good will to correct the influence of these on the mind, and with this also to rectify the
whole principle of acting and adapt it to its end. The sight of a being who is not adorned with a
single feature of a pure and good will, enjoying unbroken prosperity, can never give pleasure to
an impartial rational spectator. Thus a good will appears to constitute the indispensable condition
even of being worthy of happiness.
There are even some qualities which are of service to this good will itself and may
facilitate its action, yet which have no intrinsic unconditional value, but always presuppose a
good will, and this qualifies the esteem that we justly have for them and does not permit us to
regard them as absolutely good. Moderation in the affections and passions, self-control, and calm
deliberation are not only good in many respects, but even seem to constitute part of th ...
The cave represents how we are trained to think, feel or act according to society.docx
The cave represents how we are trained to think, feel or act according to society, following our own way and not the way intended for us. The shadows are merely a reflection of what the prisoners perceive to be reality instead of an illusion. The prisoners are trapped in society, each one of us who chooses to stay trapped in our own way. The man that escapes is the person who is no longer a slave to society and can see the difference between reality and illusion. The daylight can be compared to God's will. When you don't follow the plan that has been laid out for you by God, then you are trapped and you will only see illusions or reflections of reality. Escaping and choosing to go into "the light," or following the will of God, only then can you be set free from your prison.
When looking at a piece of art, a painting, for example, at first glance the painting can appear to be something other than what it is intended to be (reality). This reminds me of those pictures that everyone sees on social media, the picture that has circles all over it. When you look at the picture it appears that the circles are moving, but in reality the circles do not move at all. So art can more or less be perceived as more of an illusion.
An example of the picture can be seen here http://www.dailyhaha.com/_pics/movie_circles_illusion.jpg
Accepting illusion as reality happens a lot more often than we probably think. Anything that we see on TV, social media, the internet, or even dating can be perceived as an illusion at some point. Take dating, for example: how a person acts on a date is most likely not how they would act with someone they have known for a while (illusion). Not all people pretend to be something different, but in many cases they do. Recognizing what you failed to see after the initial first date and thereafter is how you would know that what you first saw was simply an illusion and therefore not reality, unless of course in reality they are simply a fake person, I suppose. Following this pattern makes you realize most people do not appear to be who they are. A good "first impression" doesn't necessarily mean much when thinking about illusion vs. reality, because in fact that's all the "first impression" is: more or less an illusion.
People live in shadows because they fail to recognize reality and choose to continue to believe in illusions. With the growth of Social media, more and more people are falling victim to what things appear to be and will stay in the dark (cave). We as a society are imprisoned by what we see and read through news channels and social media. We will believe anything that comes across CNN or any news station (not fox news though) and let them make up our mind for us. People comment on any shooting victims and assume the cop was in the wrong and is racist, in reality that is not always the case.
It’s interesting to think in terms of appearance vs reality when viewing not only art, but the world. Not taking things for what they appear to ...
The Case: Superior Foods Corporation Faces a Challenge.docx
The Case: Superior Foods Corporation Faces a Challenge
On his way to the plant office, Jason Starnes passed by the production line where hundreds of gloved, uniformed workers were packing sausages and processed meats for shipment to grocery stores around the world.
Jason's company, Superior Foods Corporation, based in Wichita, Kansas, employed 30,000 people in eight countries and had beef and pork processing plants in Arkansas, California, Milwaukee, and Nebraska City. Since a landmark United States–Japan trade agreement signed in 1988, markets had opened up for major exports of American beef, now representing 10 percent of U.S. production. Products called “variety meats”—including intestines, hearts, brains, and tongues—were very much in demand for export to international markets.
Jason was in Nebraska City to talk with the plant manager, Ben Schroeder, about the U.S. outbreak of bovine spongiform encephalopathy (mad cow disease) and its impact on the plant. On December 23, 2011, the U.S. Department of Agriculture had announced that bovine spongiform encephalopathy had been discovered in a Holstein cow in Washington State. The global reaction was swift: Seven countries imposed either total or partial bans on the importation of U.S. beef, and thousands of people were chatting about it on blogs and social networking sites. Superior had moved quickly to intercept a container load of frozen Asian-bound beef from its shipping port in Los Angeles, and all other shipments were on hold.
After walking into Ben's office, Jason sat down across from him and said, “Ben, your plant has been a top producer of variety meats for Superior, and we have appreciated all your hard work out here. Unfortunately, it looks like we need to limit production for a while—at least three months, or until the bans get relaxed. I know Senator Nelson is working hard to get the bans lifted. In the meantime, we need to shut down production and lay off about 25 percent of your workers. I know it is going to be difficult, and I'm hoping we can work out a way to communicate this to your employees.”
...
The Case: You can choose to discuss relativism in view of one of two cases.docx
The Case:
You can choose to discuss relativism in view of one of the following two cases:
The Case:
· Start by giving a brief explanation of relativism (200 words).
· What is the difference between ethical and cultural relativism? Then discuss, in view of relativism, how we can reconcile the apparent conflict between the need for enforcement of human rights standards and the need for protection of cultural diversity. (400 words)
...
The Case Study of Jim, Week Six.docx
The Case Study of Jim, Week Six
The body or text (i.e., not restating the question in your answer, not including your references or your signature) of your initial response should be at least 300 words of text to be considered substantive. You will see a red U for initial responses that are not at least 300 words. Note: your initial response to this required discussion will not count toward participation
The Case Study of Jim, Week 6
Title of Activity: In class discussion of the case study of Jim, Week Six
Objective: Review the concepts of the case study in Ch.13 of Personality and then relate Jim’s case to the theorists discussed during the week. In addition, summarize the entire case study.
1. Read “The Case of Jim” in Ch. 13 of Personality.
2. Discuss the case. This week, discussion should focus on social-cognitive theory.
3. Provide a summary of the entire case.
THE CASE OF JIM
Twenty years ago Jim was assessed from various theoretical points of view: psychoanalytic, phenomenological, personal construct, and trait.
At the time, social-cognitive theory was just beginning to evolve, and thus he was not considered from this standpoint. Later, however, it was possible to gather at least some data from this theoretical standpoint as well. Although comparisons with earlier data may be problematic because of the time lapse, we can gain at least some insight into Jim’s personality from this theoretical point of view. We do so by considering
Jim’s goals, reinforcers he experiences, and his self-efficacy beliefs.
Jim was asked about his goals for the immediate future and for the long-range future. He felt that his immediate and long-term goals were pretty much the same: (1) getting to know his son and being a good parent, (2) becoming more accepting and less critical of his wife and others, and (3) feeling good about his professional work as a consultant.
Generally he feels that there is a good chance of achieving these goals but is guarded in that estimate, with some uncertainty about just how much he will be able to “get out of myself” and thereby be more able to give to his wife and child.
Jim also was asked about positive and aversive reinforcers, things that were important to him that he found rewarding or unpleasant.
Concerning positive reinforcers, Jim reported that money was “a biggie.”
In addition he emphasized time with loved ones, the glamour of going to an opening night, and generally going to the theater or movies.
He had a difficult time thinking of aversive reinforcers. He described writing as a struggle and then noted, “I’m having trouble with this.”
Jim also discussed another social-cognitive variable: his competencies or skills (both intellectual and social). He reported that he considered himself to be very bright and functioning at a very high intellectual level. He felt that he writes well from the standpoint of a clear, organized presentation, but he had not written anything that is innovative or creative. Ji ...
The Case of Missing Boots Made in Italy.docx
The Case of Missing Boots Made in Italy
You can lead a shipper to the water, but if the horse does not want to drink…
Vocabulary:
Shipper: In commercial trade, the person who gives goods to a shipping company to be transported to a foreign destination; in export transactions, it is usually the exporter. Do not confuse the shipper with the shipping company or carrier.
Consignee: The person who is ultimately receiving the goods, generally the buyer or importer. Sometimes these people will designate a “notify party” to be notified when the goods arrive in the port of entry, so that customs clearance can be arranged and the goods picked up for further domestic transport.
Carrier: A company that transports goods (sometimes referred to as a “shipping company” or a “freight company”).
Forwarder (or "freight forwarder"): A forwarder is like a travel agent for cargo – forwarders organize the transport of your goods from departure to destination, and charge a fee for their services. There are many different kinds of forwarders. There are firms that act as both forwarders and carriers. Sometimes forwarders will have relationships with a whole string of carriers and other forwarders, so that the shipper only deals with the forwarder but in the end the goods are actually carried by a series of independent transport companies.
NVOCC: Non-vessel operating common carrier. A “common carrier” in the legal terminology refers to a carrier who has accepted the additional legal burdens imposed on a company that regularly carries goods for a fee (as opposed to someone with a truck who might agree to help you out just this once because you’re in trouble).
Container: Large standard-sized metal boxes for transporting merchandise; you see them on the back of trucks, or stacked up outside of ports like Lego toys, or on top of large ocean-going container ships. The capacity of container vessels is measured in TEU (twenty-foot equivalent units; containers generally measure 20 or 40 feet long; large vessels can now carry in excess of 4,000 TEU). There are different kinds of containers for different purposes. For example, refrigerated containers (for transporting meat or fruit, for example) are called “reefers,” so be careful where you use this term.
Consolidator: When large companies ship a lot of goods, they are usually able to fill entire containers. However, shippers who ship smaller amounts (like the shipper in the example below) often have their goods "stuffed" (the industry term) along with other goods into the same container; hence, they are "consolidated." Some firms specialize in consolidating various shipments from different shippers; these are "consolidators." A load which requires consolidation is an "LCL" or less-than-full-container load, as opposed to an "FCL" – a full-container load.
Marine Insurance: This is a common term for cargo insurance for international shipments, even in cases where much of the transport is NOT by sea; “marine insurance ...
The Cardiovascular System (NSCI/281 Version 5).docx
The Cardiovascular System
NSCI/281 Version 5
University of Phoenix Material
The Cardiovascular System
Exercise 9.6: Cardiovascular System—Thorax, Arteries, Anterior View
Layer 1 (p. 470): blanks A–H
Layer 2 (p. 470): blanks A–J
Layer 3 (p. 471): blanks A–J
Layer 4 (pp. 471-472): blanks A–W
Layer 5 (p. 472): blanks A–K
Layer 6 (pp. 472-473): blanks A–X
Exercise 9.7a: Imaging—Aortic Arch (blanks A–AA)
Exercise 9.7b: Imaging—Aortic Arch (blanks A–U)
Exercise 9.8: Cardiovascular System—Thorax, Veins, Anterior View
Layer 2 (pp. 474-475): blanks A–J
Layer 3 (p. 475): blanks A–J
Layer 4 (p. 476): blanks A–W
Layer 5 (pp. 476-477): blanks A–J
Layer 6 (p. 476): blanks A–S
Animation: Pulmonary and Systemic Circulation
After viewing the animation, answer these questions:
1. Name the two divisions of the cardiovascular system.
2. What are the destinations of these two circuits?
3. In the systemic circulation, where does gas exchange occur?
4. In the pulmonary circulation, where does gas exchange occur?
5. Name the blood vessels that carry oxygen-rich blood to the heart. How many are there? Where do they terminate?
Exercise 9.9: Imaging—Thorax (blanks A–K)
In Review
1. What is the name for the fibrous sac that encloses the heart?
2. Name the lymphatic organ that is large in children but atrophies during adolescence.
3. Name the bilobed endocrine gland located lateral to the trachea and larynx.
4. How do large arteries supply blood to body structures?
5. Name the large vessel that conveys oxygen-poor blood from the right ventricle of the heart.
6. Name the two branches of the blood vessel mentioned in question 5 that convey oxygen-poor blood to the lungs.
7. Name the blunt tip of the left ventricle.
8. What is the carotid sheath? What structures are found within it?
9. What is the serous pericardium?
10. Name the structure that ...
The British Airways Swipe Card Debacle case study.docx
The British Airways Swipe Card Debacle case study;
On Friday, July 18, 2003, British Airways staff in Terminals 1 and 4 at London's busy Heathrow Airport held a 24-hour wildcat strike. The strike was not officially sanctioned by the trade unions but was spontaneous action by over 250 check-in staff who walked out at 4 pm. The wildcat strike occurred at the start of a peak holiday season weekend, which led to chaotic scenes at Heathrow. Some 60 departure flights were grounded and over 10,000 passengers left stranded. The situation was heralded as the worst industrial situation BA had faced since 1997, when a strike was called by its cabin crew. BA's response was to cancel its services from both terminals, apologize for the disruption, and ask those who were due to fly not to go to the airport as they would be unable to service them. BA also set up a tent outside Heathrow to provide refreshments, and police were called in to manage the crowd. BA was criticized by many American visitors who were trying to fly back to the US for not providing them with sufficient information about what was going on. Staff returned to work on Saturday evening but the effects of the strike flowed on through the weekend. By Monday morning, July 21, BA reported that Heathrow was still extremely busy: "There is still a large backlog of more than 1000 passengers from services cancelled over the weekend. We are doing everything we can to get these passengers away in the next couple of days." As a result of the strike BA lost around £40 million and its reputation was severely dented. The strike also came at a time when BA was still recovering from other environmental jolts such as 9/11, the Iraq war, SARS, and inroads on its markets from budget airlines. Afterwards BA revealed that it lost over 100,000 customers as a result of the dispute.
BA staff were protesting the introduction of a system for electronic clocking in that would record when they started and finished work for the day. Staff were concerned that the system would enable managers to manipulate their working patterns and shift hours. The clocking-in system was one small part of a broader restructuring program in BA, titled the Future Size and Shape recovery program. Over the previous two years this had led to approximately 13,000 jobs, or almost one in four, being cut within the airline. As The Economist noted, the side effects of these cuts were emerging, with delayed departures resulting from a shortage of ground staff at Gatwick and a high rate of sickness causing the airline to hire in aircraft and crew to fill gaps. Rising absenteeism is a sure sign of stress in an organization that is contracting. For BA management, introduction of the swipe card system was a way of modernizing BA and improving the efficient use of staff and resources. As one BA official was quoted as saying, "We needed to simplify things and bring in the best system to manage people." For staff it was seen as a prelude to a radical shakeup in working ...
The Case: Accuracy International (AI).docx
The Case
Abstract
Accuracy International (AI) is a specialist British firearms manufacturer based in Portsmouth,
Hampshire, England and best known for producing the Accuracy International Arctic Warfare
series of precision sniper rifles. The company was established in 1978 by British Olympic shooting
gold medallist Malcolm Cooper, MBE (1947–2001), Sarah Cooper, Martin Kay, and the designers
of the weapons, Dave Walls and Dave Craig. All were highly skilled international or national target
shooters. Accuracy International's high-accuracy sniper rifles are in use with many military units
and police departments around the world. Accuracy International went into liquidation in 2005, and
was bought by a British consortium including the original design team of Dave Walls and Dave
Craig.
Earlier this year, AI's computer network was hit by a data stealing malware which cost thousands of
pounds to recover from. Also last year there have been a couple of incidents of industrial
espionage, involving staff who were later sacked and prosecuted.
As part of an ongoing covert investigation, the head of Security at AI (DG) has hired you to
conduct a forensic investigation on an image of a USB device. The USB device is a non-company-issued device, allegedly belonging to an employee, Christian Macleod, a consultant and technical manager at AI for more than six years.
Case details
Christian’s manager, David Bolton, is the regional manager and head of R&D and has been
working at AI for the last three years. David initiated this fact finding covert investigation which is
conducted with the support of the head of Security at AI.
The USB device in question was allegedly removed from Christian's workstation at AI while he was out of the office for lunch; the device was imaged and then plugged back into Christian's workstation. You have been provided with a copy of that image (the original copy is at the moment secured in a locker at the security department).
You have been told by DG that Dave was alarmed by some of the work practices of Christian and
that prompted him to start this investigation by contacting the Head of Security at AI. According to
Dave, Christian would bring in devices such as his iPod and his iPhone and he would often plug
these into his workstation. There is no policy against personal music devices and there is no BYOD policy, but there is a strict policy against copying corporate data to any personal device. The company's policy states that such data is not to be stored unencrypted on unauthorised, non-company-approved devices. According to DG, Dave has reason to believe that an earlier malware
infection incident at AI had its origins in one of Christian's personal devices.
Supporting information
1. You need to be aware that Dave and Christian do not get along as they had a few verbal exchanges
in the last year. Christian has filled in a ...
The average weekly earnings of google shares are $1.005 per share .docx
The average weekly earnings of Google shares are $1.005 per share with a standard deviation of $0.045. The distribution of returns is more or less symmetric but has a high peak. The normality test (Jarque-Bera test) rejects the hypothesis of normality for the earnings data at the 5% level.
Jarque Bera Test
data: returns
X-squared = 54.1642, df = 2, p-value = 1.731e-12
The distribution of log(returns) is also not close to a normal distribution: its skewness is close to 0, but the kurtosis statistic is 1.5, which is far from that of a normal distribution. The Jarque-Bera test is significant (p-value < 0.05), indicating that the hypothesis of a normal distribution for log(returns) can be rejected.
Jarque Bera Test
data: rets
X-squared = 49.4632, df = 2, p-value = 1.816e-11
The following plot is the time series plot of the weekly price of Google shares, which shows an increasing trend.
[Figure: Time Series Plot of Google Prices]
The time plot of returns shows that Google stock returns exhibit periods of high volatility at various times. Most returns are between +/- 10%. Sample moments show that the distribution of log returns is somewhat symmetric, with a higher peak than the normal distribution (excess kurtosis = 1.56).
To check the null hypothesis of non-stationarity:
The Dickey-Fuller test fits the AR(1) model r_t = φ1*r_{t-1} + e_t and tests H0: φ1 = 1 vs Ha: φ1 ≠ 1, where the null hypothesis indicates unit-root non-stationarity.
The Dickey-Fuller test shows that the return series is stationary (it has no unit root). The test p-value at lag order 7 is less than 0.05, so the null hypothesis of non-stationarity can be rejected.
Augmented Dickey-Fuller Test
data: rets
Dickey-Fuller = -7.1264, Lag order = 7, p-value = 0.01
alternative hypothesis: stationary
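For reference, a minimal way to reproduce this test in R (a sketch, assuming the tseries package and the log-return series rets constructed in the CODE section below; k = 7 matches the lag order in the output above):

library(tseries)
# Augmented Dickey-Fuller test with 7 augmentation lags;
# a small p-value rejects the unit-root null in favor of stationarity.
adf.test(rets, k = 7)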
To check for serial correlation in the log returns:
Log returns are not serially correlated, as shown by the Ljung-Box test (p-value > 0.05) and the autocorrelation plot.
Box-Pierce test
data: coredata(rets)
X-squared = 0.686, df = 1, p-value = 0.4075
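Note that the Box.test call above uses the default Box-Pierce statistic at lag 1; a sketch of the Ljung-Box variant at a few more lags (same rets as above):

# Ljung-Box tests on the log returns at lags 6 and 12;
# large p-values indicate no serial correlation.
Box.test(coredata(rets), lag = 6, type = "Ljung")
Box.test(coredata(rets), lag = 12, type = "Ljung")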
To look for evidence of ARCH effects in the log returns:
The analysis below shows a strong ARCH effect: the squared returns are strongly correlated. The Ljung-Box tests on squared residuals are highly significant with p-values < 0.005, and the autocorrelation plots show large autocorrelations for the first 15 lags.
Box-Pierce test
data: coredata(rets^2)
X-squared = 8.1483, df = 1, p-value = 0.00431
Plot of PACF: there is no significant correlation up to lag 10, which suggests that no AR lags are needed.
To fit an ARMA(0,0)-GARCH(2,1) model for the log returns using a normal distribution for the error terms:
Fitted Model:
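(The fitted-model output itself is not reproduced here; the rugarch specification that produces it appears in the CODE section below and is repeated here for readability:)

library(rugarch)
# ARMA(0,0) mean equation with a standard GARCH(2,1) variance equation;
# the error distribution defaults to the normal.
garch21.spec = ugarchspec(variance.model = list(garchOrder = c(2,1)),
                          mean.model = list(armaOrder = c(0,0)))
garch21.fit = ugarchfit(spec = garch21.spec, data = rets)
garch21.fit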
Residual Analysis:
Adjusted Pearson Goodness-of-Fit Test:
------------------------------------
group statistic p-value(g-1)
1 20 28.78 0.06948
2 30 38.57 0.11019
3 40 46.39 0.19380
4 50 51.15 0.38929
The above output shows that the error terms are normally distributed, as all p-values are greater than 0.05.
To fit an ARMA(0,0)-eGARCH(1,1) model with Gaussian distribution:
Fitted Model is:
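(For reference, the eGARCH(1,1) log-variance equation in rugarch's parametrization, stated here from the package documentation rather than from the fitted output, with z_t = e_t/σ_t: ln(σ_t²) = ω + α1*z_{t-1} + γ1*(|z_{t-1}| − E|z_{t-1}|) + β1*ln(σ_{t-1}²); α1 captures the sign (leverage) effect and γ1 the size effect.)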
Residual Analysis:
Adjusted Pearson Goodness-of-Fit Test:
------------------------------------
group statistic p-value(g-1)
1 20 28.95 0.06673
2 30 27.82 0.52739
3 40 46.91 0.18000
4 50 50.72 0.40546
The above output shows that the error terms are normally distributed, as all p-values are greater than 0.05.
To fit an ARMA(0,0)-TGARCH(2,1) model with normal distribution:
Fitted Model:
Residual Analysis:
Adjusted Pearson Goodness-of-Fit Test:
------------------------------------
group statistic p-value(g-1)
1 20 18.29 0.5030
2 30 31.28 0.3525
3 40 29.17 0.8742
4 50 51.36 0.3813
The above output shows that the error terms are normally distributed, as all p-values are greater than 0.05.
BEST FIT FOR THE MODEL:
garch21 egarch11 gjrgarch21
Akaike -3.436225 -3.451379 -3.463469
Bayes -3.391975 -3.407129 -3.401519
Shibata -3.436449 -3.451603 -3.463905
Hannan-Quinn -3.418814 -3.433968 -3.439094
From the above output, AR(0)-EGARCH(1,1) and AR(0)-GJR-GARCH(2,1) have almost equal (and the smallest) values for all fit criteria, and hence either can be considered the best model.
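A sketch of how this comparison table can be produced with rugarch's infocriteria() (it mirrors the comparison code in the second CODE section; garch21.fit, egarch11.fit and gjrgarch21.fit are the fitted objects from the CODE section below):

# Collect the fitted models and tabulate the Akaike, Bayes, Shibata
# and Hannan-Quinn criteria side by side; smaller values are better.
model.list = list(garch21 = garch21.fit,
                  egarch11 = egarch11.fit,
                  gjrgarch21 = gjrgarch21.fit)
info.mat = sapply(model.list, infocriteria)
rownames(info.mat) = rownames(infocriteria(garch21.fit))
info.mat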
Forecast Analysis:
RE-FIT MODELS LEAVING 100 OUT-OF-SAMPLE OBSERVATIONS FOR FORECAST and EVALUATE STATISTICS for 5-Step-Ahead Forecast.

Model   garch21        egarch11       tgarch21
MSE     0.0009497851   0.0009497741   0.0009499554
MAE     0.0224897000   0.0225260500   0.0225533000
DAC     0.6000000000   0.6000000000   0.6000000000
Since the MSE is smallest for the eGARCH model, we should consider this model for forecasting.
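A sketch of the re-fit and forecast evaluation (it mirrors the forecast code in the second CODE section; fpm() reports the MSE, MAE and directional accuracy, DAC):

# Re-fit leaving the last 100 observations out of sample, then produce
# rolling 1-step-ahead forecasts over that window and compare error measures.
garch21.fit = ugarchfit(spec = garch21.spec, data = rets, out.sample = 100)
egarch11.fit = ugarchfit(spec = egarch11.spec, data = rets, out.sample = 100)
tgarch21.fit = ugarchfit(spec = gjrgarch21.spec, data = rets, out.sample = 100)

garch21.fcst = ugarchforecast(garch21.fit, n.roll = 100, n.ahead = 1)
egarch11.fcst = ugarchforecast(egarch11.fit, n.roll = 100, n.ahead = 1)
tgarch21.fcst = ugarchforecast(tgarch21.fit, n.roll = 100, n.ahead = 1)

fcst.list = list(garch21 = garch21.fcst,
                 egarch11 = egarch11.fcst,
                 tgarch21 = tgarch21.fcst)
sapply(fcst.list, fpm)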
CODE:
library(e1071)
library(tseries)
library(xts)
library(rugarch)
price = scan("clipboard")
pricets = ts(price, frequency=52, start=c(2004,45))
returns = pricets/lag(pricets, -1)
summary(price)
skewness(price)
kurtosis(price)
sd(price)
summary(returns)
skewness(returns)
kurtosis(returns)
sd(returns)
# Normality check
hist(returns, prob=TRUE)
lines(density(returns, adjust=2), type="l")
jarque.bera.test(returns)
# log return time series
rets = log(pricets/lag(pricets, -1))
# Normality check
hist(rets, prob=TRUE)
lines(density(rets, adjust=2), type="l")
jarque.bera.test(rets)
plot.ts(pricets)
# strip off the dates and just create a simple numeric object (requires xts)
ret = coredata(rets)
# creates time plot of log returns
plot(rets)
summary(rets)
skewness(rets)
kurtosis(rets)
sd(rets)
# ADF test for checking the null hypothesis of non-stationarity
adf.test(rets)
# Computes Ljung-Box test on returns
Box.test(coredata(rets))
# Computes Ljung-Box test on squared returns to test non-linear independence
Box.test(coredata(rets^2))
# Computes Ljung-Box test on absolute returns to test non-linear independence
Box.test(abs(coredata(rets)))
par(mfrow=c(3,1))
# Plots ACF of the returns
acf(ret)
# Plot ACF of squared returns to check for ARCH effect
acf(ret^2)
# Plot ACF of absolute returns to check for ARCH effect
acf(abs(ret))
# time plots of returns, squared returns and abs(returns)
par(mfrow=c(3,1))
plot(rets, type='l')
plot(rets^2, type='l')
plot(abs(rets), type='l')
par(mfrow=c(1,1))
# plots PACF of returns to identify the order of an AR model
pacf(coredata(rets), lag=10)
# specify models using functions in the rugarch package
# Fit ARMA(0,0)-GARCH(2,1) model
garch21.spec = ugarchspec(variance.model=list(garchOrder=c(2,1)),
                          mean.model=list(armaOrder=c(0,0)))
# estimate model
garch21.fit = ugarchfit(spec=garch21.spec, data=rets)
garch21.fit
# Fit ARMA(0,0)-eGARCH(1,1) model with Gaussian distribution
egarch11.spec = ugarchspec(variance.model=list(model="eGARCH", garchOrder=c(1,1)),
                           mean.model=list(armaOrder=c(0,0)))
# estimate model
egarch11.fit = ugarchfit(spec=egarch11.spec, data=ret)
egarch11.fit
# Fit ARMA(0,0)-gjrGARCH(2,1) (TGARCH) model with normal distribution
gjrgarch21.spec = ugarchspec(variance.model=list(model="gjrGARCH", garchOrder=c(2,1)),
                             mean.model=list(armaOrder=c(0,0)),
                             distribution.model="norm")
# estimate model
gjrgarch21.fit = ugarchfit(spec=gjrgarch21.spec, data=ret)
gjrgarch21.fit
nobs          469.000000
NAs             0.000000
Minimum        -0.166652
Maximum         0.164808
1. Quartile    -0.022391
3. Quartile     0.029000
Mean            0.003843
Median          0.005614
Sum             1.802469
SE Mean         0.002079
LCL Mean       -0.000242
UCL Mean        0.007929
Variance        0.002027
Stdev           0.045027
Skewness       -0.040829
Kurtosis        1.569306
Title: Jarque-Bera Normality Test
Test Results:
  STATISTIC: X-squared: 49.4632
  P VALUE: Asymptotic p Value: 1.816e-11

The distribution of log(returns) is not normal: its skewness is close to 0, but the kurtosis statistic is 1.5, which is far from that of a normal distribution. The Jarque-Bera test is significant (p-value < 0.05), indicating that the hypothesis of a normal distribution for log(returns) can be rejected.
The following plot is the time series plot of the weekly price of Google shares, which shows an increasing trend.

[Figure: Time Series Plot of Google Prices]
The time plot of returns shows that Google stock returns exhibit periods of high volatility at various times. Most returns are between +/- 10%. Sample moments show that the distribution of log returns is somewhat symmetric, with a higher peak than the normal distribution (excess kurtosis = 1.56). At various times the returns reach +/- 15% with higher volatility. Volatility does not seem to die down quickly; it continues for a while after a volatile period.
To check the null hypothesis of non-stationarity:
The Dickey-Fuller test is expressed as H0: φ1 = 1 vs Ha: φ1 ≠ 1, where the null hypothesis indicates unit-root non-stationarity.
The Dickey-Fuller test shows that the return series is stationary (it has no unit root). The test p-value at lag order 7 is less than 0.05, so the null hypothesis of non-stationarity can be rejected.
Augmented Dickey-Fuller Test
Test Results:
  PARAMETER: Lag Order: 7
  STATISTIC: Dickey-Fuller: -7.1331
  P VALUE: 0.01
alternative hypothesis: stationary
ACF TESTS
The ACF plots below show that the Google stock returns are not correlated, indicating a constant-mean model for r_t. Both the squared-returns series and the absolute-returns series show large autocorrelations. We can conclude that the log-return process has strong non-linear dependence.
Ljung-Box testing for ARCH effects at lags 6, 12 and 18 for squared returns and absolute returns:

Box-Ljung test
data: coredata(rets^2)
X-squared = 48.3339, df = 6, p-value = 1.013e-08

Box-Ljung test
data: coredata(rets^2)
X-squared = 71.9976, df = 12, p-value = 1.352e-10

Box-Ljung test
data: coredata(rets^2)
X-squared = 98.7945, df = 18, p-value = 3.68e-13
The Ljung-Box tests confirm that the squared returns are autocorrelated. The analysis shows a strong ARCH effect: the squared returns are strongly correlated. The Ljung-Box tests on squared residuals are highly significant with p-values < 0.005, and the autocorrelation plots show large autocorrelations for the first 18 lags.
Plot of PACF: there is no significant correlation up to lag 13, which suggests that no AR lags are needed.
MODEL FITTING

To fit an ARMA(0,0)-GARCH(1,1) model for the log returns using a Student-t distribution for the error terms:

Fitted Model (with 7 degrees of freedom):

GARCH Model Fit
Conditional Variance Dynamics
-----------------------------------
GARCH Model: sGARCH(1,1)
Mean Model: ARFIMA(0,0,0)
Distribution: std

Optimal Parameters
------------------------------------
        Estimate  Std. Error  t value  Pr(>|t|)
mu      0.004432  0.001778     2.4924  0.012689
omega   0.000103  0.000053     1.9574  0.050295
alpha1  0.094169  0.032716     2.8784  0.003997
beta1   0.856326  0.044635    19.1850  0.000000
shape   6.797338  2.000925     3.3971  0.000681
Residual Analysis:

Q-Statistics on Standardized Residuals
------------------------------------
               statistic  p-value
Lag[1]            0.0339   0.8539
Lag[p+q+1][1]     0.0339   0.8539
Lag[p+q+5][5]     2.4766   0.7800
d.o.f=0
H0 : No serial correlation

Q-Statistics on Standardized Squared Residuals
------------------------------------
               statistic  p-value
Lag[1]            0.1274   0.7212
Lag[p+q+1][3]     1.4245   0.2327
Lag[p+q+5][7]     3.4346   0.6333
d.o.f=2

The above output shows no evidence of autocorrelation in the residuals; they behave like white noise. There is also no evidence of serial correlation in the squared residuals; they too behave like white noise.

Adjusted Pearson Goodness-of-Fit Test:
------------------------------------
  group  statistic  p-value(g-1)
1    20       8465             0
2    30      13148             0
3    40      17834             0
4    50      22521             0

Test for goodness of fit: the assumed error distribution is rejected (all p-values are 0).
To fit an ARMA(0,0)-eGARCH(1,1) model with Gaussian distribution:

Conditional Variance Dynamics
-----------------------------------
GARCH Model: eGARCH(1,1)
Mean Model: ARFIMA(0,0,0)
Distribution: norm

Optimal Parameters
------------------------------------
         Estimate  Std. Error  t value  Pr(>|t|)
mu       0.004814  0.001837     2.6205  0.008780
omega   -0.427600  0.165861    -2.5781  0.009936
alpha1  -0.086690  0.033886    -2.5583  0.010518
beta1    0.931657  0.026273    35.4604  0.000000
gamma1   0.177187  0.048248     3.6724  0.000240
Fitted Model is:

Residual Analysis:

Q-Statistics on Standardized Residuals
------------------------------------
               statistic  p-value
Lag[1]           0.03655   0.8484
Lag[p+q+1][1]    0.03655   0.8484
Lag[p+q+5][5]    2.73813   0.7403
d.o.f=0
H0 : No serial correlation

Q-Statistics on Standardized Squared Residuals
------------------------------------
               statistic  p-value
Lag[1]            0.2255   0.6348
Lag[p+q+1][3]     2.6812   0.1015
Lag[p+q+5][7]     4.1362   0.5300
d.o.f=2

ARCH LM Tests
------------------------------------
              Statistic  DoF  P-Value
ARCH Lag[2]       2.224    2   0.3290
ARCH Lag[5]       4.682    5   0.4559
ARCH Lag[10]      7.053   10   0.7204

The above output shows no evidence of autocorrelation in the residuals; they behave like white noise. There is also no evidence of serial correlation in the squared residuals; they too behave like white noise.

Adjusted Pearson Goodness-of-Fit Test:
------------------------------------
  group  statistic  p-value(g-1)
1    20      26.65        0.1131
2    30      32.04        0.3179
3    40      43.67        0.2798
4    50      47.74        0.5243

The above output shows that the error terms are normally distributed, as all p-values are greater than 0.05.
To fit an ARMA(0,0)-gjrGARCH(2,1) (TGARCH) model with Gaussian distribution:

Conditional Variance Dynamics
-----------------------------------
GARCH Model: gjrGARCH(2,1)
Mean Model: ARFIMA(0,0,0)
Distribution: norm

Optimal Parameters
------------------------------------
         Estimate  Std. Error    t value  Pr(>|t|)
mu       0.004928  0.001860     2.649926  0.008051
omega    0.000120  0.000053     2.257334  0.023987
alpha1   0.000000  0.000877     0.000011  0.999991
alpha2   0.072854  0.037335     1.951388  0.051011
beta1    0.828559  0.053752    15.414339  0.000000
gamma1   0.397469  0.144242     2.755564  0.005859
gamma2  -0.315459  0.131445    -2.399922  0.016399

Fitted Model is:

Residual Analysis:

Q-Statistics on Standardized Residuals
------------------------------------
               statistic  p-value
Lag[1]            0.2311   0.6307
Lag[p+q+1][1]     0.2311   0.6307
Lag[p+q+5][5]     2.9412   0.7090
d.o.f=0
H0 : No serial correlation

Q-Statistics on Standardized Squared Residuals
------------------------------------
               statistic  p-value
Lag[1]             2.137  0.14380
Lag[p+q+1][4]      4.191  0.04063
Lag[p+q+5][8]      4.680  0.45620
d.o.f=3

ARCH LM Tests
------------------------------------
              Statistic  DoF  P-Value
ARCH Lag[2]       2.461    2   0.2921
ARCH Lag[5]       4.904    5   0.4277
ARCH Lag[10]      7.140   10   0.7122

The above output shows no evidence of autocorrelation in the residuals; they behave like white noise. There is also no evidence of serial correlation in the squared residuals; they too behave like white noise.

Adjusted Pearson Goodness-of-Fit Test:
------------------------------------
  group  statistic  p-value(g-1)
1    20      20.00        0.3947
2    30      32.43        0.3014
3    40      30.87        0.8203
4    50      54.56        0.2714

The above output shows that the error terms are normally distributed, as all p-values are greater than 0.05.
BEST FIT FOR THE MODEL:

              garch11    egarch11   gjrgarch11
Akaike       -3.479720  -3.455538  -3.474405
Bayes        -3.435471  -3.411288  -3.412455
Shibata      -3.479944  -3.455762  -3.474841
Hannan-Quinn -3.462310  -3.438127  -3.450030

From the above output, the AR(0)-GARCH(1,1) model has the smallest (most negative) values for all fit criteria and hence can be considered the best model.
Forecast Analysis:
RE-FIT MODELS LEAVING 100 OUT-OF-SAMPLE OBSERVATIONS FOR FORECAST and EVALUATE STATISTICS for 5-Step-Ahead Forecast.

       garch11       egarch11      tgarch21
MSE    0.000950199   0.0009498531  0.0009499473
MAE    0.02257449    0.0225407     0.02255243
DAC    0.6           0.6           0.6

Since the MSE is smallest for the eGARCH model, we should consider this model for forecasting.
CODE:
#Tomas Georgakopoulos CSC 425 Final Project
setwd("C:/Course/CSC425")
15. # Analysis of Google weekly returns from
library(forecast)
library(TSA)
# import data in R and compute log returns
# import libraries for TS analysis
library(zoo)
library(tseries)
myd= read.table('Weekly-GOOG-TSDATA.csv', header=T,
sep=',')
pricets = zoo(myd$price, as.Date(as.character(myd$date),
format=c("%m/%d/%Y")))
#log return time series
rets = log(pricets/lag(pricets, -1))
# strip off the dates and just create a simple numeric object
ret = coredata(rets);
#compute statistics
library(fBasics)
basicStats(rets)
#HISTOGRAM
par(mfcol=c(1,2))
hist(rets, xlab="Weekly Returns", prob=TRUE,
main="Histogram")
#Add approximating normal density curve
xfit<-
seq(min(rets,na.rm=TRUE),max(rets,na.rm=TRUE),length=40)
yfit<-
dnorm(xfit,mean=mean(rets,na.rm=TRUE),sd=sd(rets,na.rm=TR
UE))
lines(xfit, yfit, col="blue", lwd=2)
#CREATE NORMAL PROBABILITY PLOT
16. qqnorm(rets)
qqline(rets, col = 'red', lwd=2)
# creates time plot of log returns
par(mfrow=c(1,1))
plot(rets)
#Perform Jarque-Bera normality test.
normalTest(rets,method=c('jb'))
#SKEWNESS TEST
skew_test=skewness(rets)/sqrt(6/length(rets))
skew_test
print("P-value = ")
2*(1-pnorm(abs(skew_test)))
#FAT-TAIL TEST
k_stat = kurtosis(rets)/sqrt(24/length(rets))
print("Kurtosis test statistic")
k_stat
print("P-value = ")
2*(1-pnorm(abs(k_stat)))
#COMPUTE DICKEY-FULLER TEST
library(fUnitRoots)
adfTest(rets, lags=7, type=c("c"))
# Computes Ljung-Box test on squared returns to test non-linear
independence at lag 6 and 12
Box.test(coredata(rets^2),lag=6,type='Ljung')
Box.test(coredata(rets^2),lag=12,type='Ljung')
Box.test(coredata(rets^2),lag=18,type='Ljung')
# Computes Ljung-Box test on absolute returns to test non-
linear independence at lag 6 and 12
17. Box.test(abs(coredata(rets)),lag=6,type='Ljung')
Box.test(abs(coredata(rets)),lag=12,type='Ljung')
Box.test(abs(coredata(rets)),lag=18,type='Ljung')
# Plots ACF function of vector data
par(mfrow=c(3,1))
acf(ret)
# Plot ACF of squared returns to check for ARCH effect
acf(ret^2)
# Plot ACF of absolute returns to check for ARCH effect
acf(abs(ret))
#plot returns, square returns and abs(returns)
# Plots ACF function of vector data
par(mfrow=c(3,1))
plot(rets, type='l')
# Plot ACF of squared returns to check for ARCH effect
plot(rets^2,type='l')
# Plot ACF of absolute returns to check for ARCH effect
plot(abs(rets),type='l')
par(mfrow=c(1,1))
# plots PACF of squared returns to identify order of AR model
pacf(coredata(rets),lag=30)
#GARCH Models
library(rugarch)
#Fit AR(0,0)-GARCH(1,1) model
garch11.spec=ugarchspec(variance.model=list(garchOrder=c(1,1
)), mean.model=list(armaOrder=c(0,0)),distribution.model =
"std")
#estimate model
18. garch11.fit=ugarchfit(spec=garch11.spec, data=rets)
garch11.fit
#Fit AR(0,0)-eGARCH(1,1) model with Gaussian distribution
egarch11.spec=ugarchspec(variance.model=list(model =
"eGARCH",garchOrder=c(1,1)),
mean.model=list(armaOrder=c(0,0)))
#estimate model
egarch11.fit=ugarchfit(spec=egarch11.spec, data=rets)
egarch11.fit
#Fit AR(0,0)-TGARCH(1,1) model with norm-distribution
gjrgarch11.spec=ugarchspec(variance.model=list(model =
"gjrGARCH",garchOrder=c(2,1)),
mean.model=list(armaOrder=c(0,0)), distribution.model =
"norm")
#estimate model
gjrgarch11.fit=ugarchfit(spec=gjrgarch11.spec, data=rets)
gjrgarch11.fit
# compare information criteria
model.list = list(garch11 = garch11.fit,
egarch11 = egarch11.fit,
gjrgarch11 = gjrgarch11.fit)
info.mat = sapply(model.list, infocriteria)
rownames(info.mat) = rownames(infocriteria(garch11.fit))
info.mat
# RE-FIT MODELS LEAVING 100 OUT-OF-SAMPLE OBSERVATIONS
# FOR FORECAST EVALUATION STATISTICS
garch11.fit = ugarchfit(spec=garch11.spec, data=rets, out.sample=100)
egarch11.fit = ugarchfit(egarch11.spec, data=rets, out.sample=100)
tgarch11.fit = ugarchfit(spec=gjrgarch11.spec, data=rets, out.sample=100)
garch11.fcst = ugarchforecast(garch11.fit, n.roll=100, n.ahead=1)
egarch11.fcst = ugarchforecast(egarch11.fit, n.roll=100, n.ahead=1)
tgarch11.fcst = ugarchforecast(tgarch11.fit, n.roll=100, n.ahead=1)
fcst.list = list(garch11=garch11.fcst,
                 egarch11=egarch11.fcst,
                 tgarch11=tgarch11.fcst)
fpm.mat = sapply(fcst.list, fpm)
fpm.mat
CSC425 – Time series analysis and forecasting
Homework 5 – not to be submitted
The goal of this assignment is to provide students with some practical training on GARCH/EGARCH/TGARCH models for the analysis of the volatility of a stock return.
Solutions will be posted on Thursday, November 7th, 2013.
Reading assignment:
1. Read Chapter 4 in the short book and Chapter 3 in the long book on volatility models.
2. Review the course documents posted under weeks 7 and 8.
Problem
Use the data file nordstrom_w_00_13.csv that contains the Nordstrom (JWN) stock weekly prices from January 2000 to October 2013. The data file contains dates (date) and weekly prices (price). You can also use the code and the analysis of the S&P500 returns used in the week 7 and 8 lectures as your reference for the analysis of this data. Analyze the Nordstrom stock log returns following the steps below.
1. Compute log returns, and analyze their time plot.
Moments
N                     693          Sum Weights        693
Mean                  0.00268299   Sum Observations   1.85931229
Std Deviation         0.06068019   Variance           0.00368209
Skewness             -0.2804891    Kurtosis           7.55351876
Uncorrected SS        2.55299156   Corrected SS       2.54800304
Coeff Variation    2261.66262      Std Error Mean     0.00230505
The time plot shows that Nordstrom stock returns exhibit periods of high volatility at various times, and consistently high volatility from 2007 to 2010. Most returns are between +/- 10%. The sample moments show that the distribution of log returns is roughly symmetric with very fat tails (kurtosis = 7.55).
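The moments above can be reproduced in R with a short sketch like the following. It assumes the csv file has the same date/price layout and %m/%d/%Y date format as the Google file analyzed earlier; adjust the format string if needed.
# compute Nordstrom weekly log returns and their sample moments
library(zoo)
library(fBasics)
myd = read.table('nordstrom_w_00_13.csv', header=T, sep=',')
pricets = zoo(myd$price, as.Date(as.character(myd$date), format="%m/%d/%Y"))
rets = log(pricets/lag(pricets, -1)) # weekly log returns
basicStats(coredata(rets)) # mean, sd, skewness, kurtosis, etc.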
Time plot of Nordstrom weekly log returns, January 2000 to October 2013
2. Is there evidence of serial correlations in the log returns? Use autocorrelations and a 5% significance level to answer the question.
Log returns are not serially correlated, as shown by the Ljung-Box test with p-values > 0.05 and the autocorrelation plot.
Name of Variable = return
Mean of Working Series    0.002683
Standard Deviation        0.060636
Number of Observations    693

Autocorrelation Check for White Noise
To    Chi-          Pr >
Lag   Square   DF   ChiSq    Autocorrelations
 6      6.16    6   0.4050   -0.009  0.011 -0.048 -0.009  0.078 -0.006
12      9.08   12   0.6957   -0.042 -0.038 -0.019  0.011 -0.015  0.016
18     26.17   18   0.0960    0.033 -0.026  0.088 -0.032  0.101 -0.056
24     31.58   24   0.1377    0.029 -0.037  0.025 -0.042 -0.026 -0.048
30     33.84   30   0.2871   -0.029 -0.003 -0.037  0.023  0.011 -0.016
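The same white-noise check can be run in R with Box.test(), assuming the Nordstrom log returns are stored in the zoo series rets from the sketch above:
# Ljung-Box tests on the log returns (no serial correlation expected)
Box.test(coredata(rets), lag=6, type='Ljung')
Box.test(coredata(rets), lag=12, type='Ljung')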
3. Is there evidence of ARCH effects in the log returns? Use appropriate tests at the 5% significance level to answer this question.
The analysis below shows a strong ARCH effect. The squared returns are strongly correlated. The Ljung-Box tests on squared returns are highly significant with p-values < 0.001, and the autocorrelation plot shows large autocorrelations for the first 10 lags.
Name of Variable = returnsq
Mean of Working Series    0.003684
Standard Deviation        0.011302
Number of Observations    693

Autocorrelation Check for White Noise
To     Chi-          Pr >
Lag    Square   DF   ChiSq    Autocorrelations
 6     331.56    6   <.0001    0.534  0.281  0.123  0.101  0.167  0.240
12     407.05   12   <.0001    0.246  0.187  0.053  0.036  0.032  0.082
18     469.70   18   <.0001    0.069  0.072  0.128  0.127  0.156  0.145
24     494.90   24   <.0001    0.149  0.090  0.050  0.035  0.014  0.031
30     501.01   30   <.0001    0.033  0.064  0.051  0.023  0.010 -0.004
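An equivalent R check for ARCH effects, again assuming rets from the sketch above, applies the Ljung-Box test to the squared and absolute returns:
# strong correlation in squared/absolute returns indicates an ARCH effect
Box.test(coredata(rets)^2, lag=12, type='Ljung')
Box.test(abs(coredata(rets)), lag=12, type='Ljung')
acf(coredata(rets)^2) # ACF of squared returns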
4. Fit a GARCH(1,1) model for the log returns using a t-distribution for the error terms. Perform model checking (analyze whether the residuals are white noise, whether the squared residuals are white noise, and check whether the t-distribution is a good fit for the data) and write down the fitted model.
The GARCH model with t-distribution is an adequate model for the volatility of the returns. All parameters are significant and the squared residuals are white noise (LB tests on squared residuals are non-significant).
The fitted GARCH model can be expressed as follows:
r_t = 0.005 + a_t
a_t = σ_t e_t with σ_t² = 0.1483 a_{t-1}² + 0.836 σ_{t-1}²
(the intercept ARCH0 can be considered equal to zero)
where the error term e_t has a t-distribution with 1/0.185 ≈ 5 degrees of freedom.
The AUTOREG Procedure
GARCH Estimates
SSE               2.55069233   Observations       693
MSE               0.00368      Uncond Var         0.00531549
Log Likelihood    1102.73026   Total R-Square     .
SBC              -2172.7554    AIC               -2195.4605
MAE               0.04086373   AICC              -2195.3732
MAPE            113.334007     HQC               -2186.6796
Inverse of t DF   0.185
R output
Robust Standard Errors:
Estimate Std. Error t value Pr(>|t|)
mu 0.004729 0.001465 3.2281 0.001246
omega 0.000086 0.000050 1.7302 0.083590
alpha1 0.145749 0.052318 2.7858 0.005340
beta1 0.836471 0.051967 16.0962 0.000000
shape 5.345569 1.009241 5.2966 0.000000
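The R output above comes from a fit of this form in the rugarch package (the same pattern used in the S&P500 session later in this document); a sketch, assuming rets holds the Nordstrom log returns:
# ARMA(0,0)-GARCH(1,1) with Student-t innovations
library(rugarch)
garch11.t.spec = ugarchspec(variance.model=list(garchOrder=c(1,1)),
  mean.model=list(armaOrder=c(0,0)), distribution.model = "std")
garch11.t.fit = ugarchfit(spec=garch11.t.spec, data=rets)
garch11.t.fit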
5. Fit an EGARCH(1,1) model for the Nordstrom log returns using a normal distribution for the error terms. Perform model checking and write down the fitted model.
The EGARCH model is adequate, although the Gaussian distribution on the error terms is not sufficient to describe the most extreme events. No ARCH effects are shown in the residuals. The leverage parameter "theta" is significant, showing that the volatility of Nordstrom stock returns is affected more heavily by negative shocks.
The fitted model can be written as follows:
r_t = 0.0012 + a_t
a_t = σ_t e_t with ln σ_t² = -0.194 + 0.213 g(e_{t-1}) + 0.966 ln σ_{t-1}²
g(e_{t-1}) = -0.47 e_{t-1} + [|e_{t-1}| - E(|e_{t-1}|)]
The AUTOREG Procedure
SBC             -2154.3958    AIC          -2177.101
MAE               0.04095052  AICC         -2177.0136
MAPE            101.222881    HQC          -2168.32
Normality Test   71.9939      Pr > ChiSq   <.0001
Parameter Estimates
                          Standard              Approx
Variable    DF  Estimate     Error    t Value   Pr > |t|
Intercept    1  0.001236  0.001752      0.71    0.4806
EARCH0       1 -0.1940    0.0636       -3.05    0.0023
EARCH1       1  0.2133    0.0316        6.75    <.0001
EGARCH1      1  0.9662    0.0106       91.41    <.0001
THETA        1 -0.4748    0.1121       -4.23    <.0001
R output
Robust Standard Errors:
Estimate Std. Error t value Pr(>|t|)
mu 0.001271 0.001774 0.71655 0.473654
omega -0.169047 0.110169 -1.53443 0.124924
alpha1 -0.095670 0.035615 -2.68620 0.007227
beta1 0.970507 0.018820 51.56682 0.000000
gamma1 0.194542 0.059579 3.26526 0.001094
In R the model is written as
r_t = 0.0012 + a_t
a_t = σ_t e_t with ln σ_t² = -0.169 - 0.095 e_{t-1} + 0.194(|e_{t-1}| - E(|e_{t-1}|)) + 0.970 ln σ_{t-1}²
where the error term e_t has a standard normal distribution (no shape parameter is estimated in this Gaussian fit).
Note that the leverage coefficients of e_{t-1} in SAS and R are equivalent: the SAS coefficient on e_{t-1} is 0.213 × (-0.475) ≈ -0.10, matching the R alpha1 estimate of -0.096.
6. Fit an EGARCH(1,1) model for the Nordstrom log returns using a t-distribution for the error terms. Perform model checking and write down the fitted model.
The EGARCH model with t-distributed errors is a good model for the data. No ARCH effects are shown in the residuals. The leverage parameter "theta" is significant and negative, showing that the volatility of Nordstrom stock returns is affected more heavily by negative shocks.
The fitted model can be written as follows:
r_t = 0.0032 + a_t
a_t = σ_t e_t with ln σ_t² = -0.191 + 0.158 g(e_{t-1}) + 0.974 ln σ_{t-1}²
g(e_{t-1}) = -0.55 e_{t-1} + [|e_{t-1}| - E(|e_{t-1}|)]
e_t has a t-distribution with 6 degrees of freedom.
The MODEL Procedure
Nonlinear Likelihood Summary of Residual Errors
              DF     DF                                          Adj
Equation   Model  Error     SSE       MSE   Root MSE  R-Square   R-Sq
                7    686  2.5482   0.00371    0.0609   -0.0001  -0.0088
nresid.y             686  1028.4    1.4991    1.2244

Nonlinear Likelihood Parameter Estimates
                                   Approx               Approx
Parameter    Estimate    Std Err   t Value   Pr > |t|
beta1        0.977603   0.016446   59.4433   0.000000
gamma1       0.180592   0.075474    2.3928   0.016722
shape        5.783774   1.169335    4.9462   0.000001
In R the model is written as
r_t = 0.0033 + a_t
a_t = σ_t e_t with ln σ_t² = -0.138 - 0.104 e_{t-1} + 0.180(|e_{t-1}| - E(|e_{t-1}|)) + 0.978 ln σ_{t-1}²
where the error term has a t-distribution with 6 degrees of freedom.
Note that the leverage coefficients of e_{t-1} in SAS and R are equivalent.
Name of Variable = resid2
Mean of Working Series    1.483942
Standard Deviation        2.819517
Number of Observations    693

Autocorrelation Check for White Noise
To    Chi-          Pr >
Lag   Square   DF   ChiSq    Autocorrelations
 6      5.11    6   0.5299    0.026  0.020 -0.044 -0.061  0.024 -0.004
12     13.61   12   0.3261    0.025  0.088  0.014 -0.041  0.017 -0.039
18     19.28   18   0.3748    0.043 -0.032 -0.038 -0.028 -0.049  0.020
24     23.55   24   0.4873    0.004  0.028 -0.025 -0.051 -0.039 -0.021
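In R, the corresponding check that the squared standardized residuals are white noise can be sketched as follows, where fit stands for the relevant rugarch fit object (for example the EGARCH-t fit estimated in the R session later in this document):
z = residuals(fit, standardize=TRUE) # standardized residuals
Box.test(coredata(z)^2, lag=12, type='Ljung') # should be non-significant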
7. Fit a GJR-TGARCH(1,1) model for the Nordstrom log returns using a t-distribution for the innovations. Perform model checking and write down the fitted model.
Similar to the AR(0)-EGARCH(1,1) model, the AR(0)-GJR(1,1) model is also a good model for the data. No ARCH effects are shown in the residuals. The leverage parameter is significant and positive, showing that the volatility of Nordstrom stock returns is affected more heavily by negative shocks.
The fitted model can be written as follows:
r_t = 0.0036 + a_t
a_t = σ_t e_t with σ_t² = (0.034 + 0.13 N_{t-1}) a_{t-1}² + 0.800 σ_{t-1}²
with N_{t-1} = 1 if a_{t-1} < 0, and N_{t-1} = 0 if a_{t-1} > 0
e_t has a t-distribution with 5 degrees of freedom.
The MODEL Procedure
Nonlinear Likelihood Parameter Estimates
                                   Approx               Approx
Parameter    Estimate    Std Err   t Value   Pr > |t|
intercept    0.003589    0.00157      2.28
gamma1       0.153341   0.074560     2.0566   0.039723
shape        5.767186   1.177356     4.8984   0.000001
The fitted model can be written as follows:
r_t = 0.0039 + a_t
a_t = σ_t e_t with σ_t² = (0.039 + 0.15 N_{t-1}) a_{t-1}² + 0.858 σ_{t-1}²
with N_{t-1} = 1 if a_{t-1} < 0, and N_{t-1} = 0 if a_{t-1} > 0
e_t has a t-distribution with 6 degrees of freedom.
8. Is the leverage effect significant at the 5% level?
The leverage parameters in both the EGARCH and the GJR models are significant, and have values indicating that volatility reacts more heavily to negative shocks.
9. What model provides the best fit for the data? Explain.
Since the leverage parameters in the AR(0)-EGARCH(1,1) and AR(0)-GJR(1,1) models are significant, and the residuals are white noise with no ARCH effect, we can conclude that both models provide a good explanation of the volatility behavior for Nordstrom stock returns. Either model can be chosen; there is no clear winner. (Note that in SAS, PROC MODEL does not compute the BIC criterion – backtesting can be used to compare models.)
In R the EGARCH(1,1) and the GJR-GARCH(1,1) models have very similar BIC values:
          garch11.t    egarch11   gjrgarch11
Akaike    -3.172122   -3.186143    -3.183976
Bayes     -3.139358   -3.146827    -3.144660
10. Use the selected model to compute up to 5-step-ahead forecasts of the simple returns and their volatility.
The following predictions are based on the AR(0)-EGARCH(1,1) model. "sigma" is the predicted conditional standard deviation, or volatility, and "series" is the predicted return (note that the predicted returns equal the sample mean return, since the returns are white noise).
R output for forecasts
sigma series
2013-10-29 0.02732 0.003374
2013-10-30 0.02764 0.003374
2013-10-31 0.02796 0.003374
2013-11-01 0.02827 0.003374
2013-11-04 0.02858 0.003374
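A sketch of the forecast step in R, assuming egarch11.t.fit is the estimated ARMA(0,0)-EGARCH(1,1) model with t-distributed errors from the rugarch session:
egarch11.t.fcst = ugarchforecast(egarch11.t.fit, n.ahead=5)
egarch11.t.fcst # prints the 'series' (mean) and 'sigma' (volatility) forecasts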
SAS code
*import data from file;
proc import datafile='nordstrom_w_00_13.csv' out=myd replace;
t-distribution QQ plot of residuals for the GJR-GARCH model
S&P500 analysis
Fitting an ARMA(0,0)-GARCH(1,1) model with Gaussian errors
> # Fitting an ARMA(0,0)-GARCH(1,1) model using the rugarch package
> # log returns show weak serial autocorrelations in the mean and an ARCH effect
> # Fitting a GARCH(1,1) model
> # Use ugarchspec() function to specify model
> garch11.spec=ugarchspec(variance.model=list(garchOrder=c(1,1)),
  mean.model=list(armaOrder=c(0,0)))
> #estimate model
> garch11.fit=ugarchfit(spec=garch11.spec, data=ret)
> garch11.fit
*---------------------------------*
* GARCH Model Fit *
*---------------------------------*
Conditional Variance Dynamics
-----------------------------------
GARCH Model : sGARCH(1,1)
Mean Model : ARFIMA(0,0,0)
Distribution : norm
Optimal Parameters
------------------------------------
Estimate Std. Error t value Pr(>|t|)
mu 0.006814 0.001798 3.7889 0.000151
omega 0.000090 0.000049 1.8302 0.067224
alpha1 0.118390 0.031384 3.7723 0.000162
beta1 0.844396 0.034387 24.5556 0.000000
The fitted ARMA(0,0)-GARCH(1,1) model with Gaussian errors can be written as
r_t = 0.0068 + a_t
a_t = σ_t e_t with σ_t² = 0.00009 + 0.1184 a_{t-1}² + 0.8444 σ_{t-1}²
55. Robust Standard Errors:
Estimate Std. Error t value Pr(>|t|)
mu 0.006814 0.001914 3.5592 0.000372
omega 0.000090 0.000070 1.2858 0.198502
alpha1 0.118390 0.041165 2.8760 0.004028
beta1 0.844396 0.040113 21.0503 0.000000
LogLikelihood : 828.5681
Information Criteria
------------------------------------
Akaike -3.4286
Bayes -3.3938
Shibata -3.4287
Hannan-Quinn -3.4149
Q-Statistics on Standardized Residuals
------------------------------------
statistic p-value
Lag[1] 0.5261 0.4682
Lag[p+q+1][1] 0.5261 0.4682
Lag[p+q+5][5] 5.9001 0.3161
d.o.f=0
H0 : No serial correlation
Q-Statistics on Standardized Squared Residuals
------------------------------------
...
Adjusted Pearson Goodness-of-Fit Test:
------------------------------------
group statistic p-value(g-1)
3 40 75.76 0.0003804
4 50 79.60 0.0037117
The Ljung-Box test for serial correlation is computed on the residuals, and the Ljung-Box test for an ARCH/GARCH effect is computed on the squared residuals.
The goodness-of-fit test checks the distribution assumed for the error term. The null hypothesis states that the distribution for the error terms in the model is adequate; thus a small p-value (< α) indicates that the null hypothesis can be rejected and the distribution assumption is not adequate.
> #create selection list of plots for garch(1,1) fit
> plot(garch11.fit)
> #to display all subplots on one page
> plot(garch11.fit, which="all")
Residual analysis of the GARCH model shows that the model fits the data adequately. The Ljung-Box test (Q-statistic) on the residuals is not significant, showing that the hypothesis of no correlation in the residuals cannot be rejected. Similarly, the Ljung-Box test (Q-statistic) on the squared standardized residuals is not significant, suggesting that the residuals show no ARCH/GARCH effect. The Adjusted Pearson goodness-of-fit test is significant, indicating that the normal distribution assumed for the error term is not appropriate, as also shown by the QQ plot of the residuals.
Elapsed time : 0.260026
> plot(garch11.fit, which="all")
Error in plot.new() : figure margins too large
Fitting an ARMA(0,0)-GARCH(1,1) model with t-distributed errors
> # use Student-t innovations
> #specify model using functions in rugarch package
> #Fit ARMA(0,0)-GARCH(1,1) model with t-distribution
> garch11.t.spec=ugarchspec(variance.model=list(garchOrder=c(1,1)),
  mean.model=list(armaOrder=c(0,0)), distribution.model = "std")
> #estimate model
> garch11.t.fit=ugarchfit(spec=garch11.t.spec, data=ret)
> garch11.t.fit
*---------------------------------*
* GARCH Model Fit *
*---------------------------------*
Conditional Variance Dynamics
-----------------------------------
GARCH Model : sGARCH(1,1)
Mean Model : ARFIMA(0,0,0)
Distribution : std
Optimal Parameters
------------------------------------
Estimate Std. Error t value Pr(>|t|)
mu 0.008012 0.001746 4.5892 0.000004
omega 0.000122 0.000068 1.7901 0.073444
alpha1 0.123683 0.038919 3.1780 0.001483
beta1 0.821471 0.050706 16.2008 0.000000
shape 6.891814 1.978494 3.4834 0.000495
The fitted ARMA(0,0)-GARCH(1,1) model with t-distributed errors can be written as
r_t = 0.0080 + a_t
a_t = σ_t e_t with σ_t² = 0.00012 + 0.1237 a_{t-1}² + 0.8215 σ_{t-1}²
where e_t has a t-distribution with 7 degrees of freedom (shape = 6.89).
Adjusted Pearson Goodness-of-Fit Test:
...
3 40 39.50 0.44759
4 50 43.64 0.68967
Residual analysis of the GARCH model with t-distributed error terms shows that the model fits the data adequately. The Ljung-Box test (Q-statistic) on the residuals is not significant, showing that the hypothesis of no correlation in the residuals cannot be rejected. Similarly, the Ljung-Box test (Q-statistic) on the squared standardized residuals is not significant, suggesting that the residuals show no ARCH/GARCH effect. The Adjusted Pearson goodness-of-fit test is not significant, indicating that the distribution of the error terms can be described by a t-distribution with 7 degrees of freedom, as also shown by the QQ plot of the residuals (created using plot(garch11.t.fit, which=9)).
Fitting an ARMA(0,0)-EGARCH(1,1) model
The EGARCH model fitted by the rugarch package has a slightly different form than the model in the textbook. Here is the general expression of the EGARCH(1,1) model:
r_t = μ_t + a_t, a_t = σ_t e_t
ln(σ_t²) = ω + α_1 e_{t-1} + γ_1(|e_{t-1}| - E(|e_{t-1}|)) + β_1 ln(σ_{t-1}²)
where μ_t can follow an ARMA process, but most typically it will be constant. Gamma1 (γ1) is the leverage parameter, so if gamma1 in the output is significant, we can conclude that the volatility process has an asymmetric behavior.
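One convenient way to visualize this asymmetry is the news impact curve; rugarch provides a newsimpact() helper for fitted models. A sketch, using the Gaussian EGARCH fit egarch11.fit estimated below:
ni = newsimpact(object = egarch11.fit)
plot(ni$zx, ni$zy, type="l", xlab="lagged shock", ylab="conditional variance")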
Fitting an ARMA(0,0)-EGARCH(1,1) model with Gaussian distribution (similar to the SAS example)
> #Fit ARMA(0,0)-eGARCH(1,1) model with Gaussian distribution
> egarch11.spec=ugarchspec(variance.model=list(model = "eGARCH",
  garchOrder=c(1,1)), mean.model=list(armaOrder=c(0,0)))
> #estimate model
> egarch11.fit=ugarchfit(spec=egarch11.spec, data=ret)
> egarch11.fit
*---------------------------------*
* GARCH Model Fit *
*---------------------------------*
Conditional Variance Dynamics
-----------------------------------
GARCH Model : eGARCH(1,1)
Mean Model : ARFIMA(0,0,0)
Distribution : norm
Optimal Parameters
------------------------------------
Estimate Std. Error t value Pr(>|t|)
mu 0.005937 0.001860 3.1915 0.001415
omega -0.610497 0.263901 -2.3134 0.020703
alpha1 -0.111069 0.042614 -2.6064 0.009150
beta1 0.902462 0.041695 21.6442 0.000000
gamma1 0.209405 0.050542 4.1432 0.000034
Using the R output above, the ARMA(0,0)-EGARCH(1,1) model can be written as follows.
Fitted model:
r_t = 0.0059 + a_t, a_t = σ_t e_t
ln(σ_t²) = -0.610 + (-0.111 e_{t-1} + 0.209(|e_{t-1}| - E(|e_{t-1}|))) + 0.9024 ln(σ_{t-1}²)
Note that since e_t has a Gaussian distribution, E(|e_t|) = sqrt(2/π) = 0.7979, or approximately 0.80 (see page 143 in the textbook). Thus we can rewrite the expression above with g(e_{t-1}) = -0.111 e_{t-1} + 0.209(|e_{t-1}| - 0.80), so that
exp(g(-2)) / exp(g(2)) = exp(0.473 - 0.029) = exp(0.444) ≈ 1.56.
Therefore, the impact of a negative shock of size two standard deviations is about 56% higher than the impact of a positive shock of the same size.
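The calculation can be verified with a couple of lines of R using the fitted coefficients:
alpha1 = -0.111; gamma1 = 0.209
g = function(e) alpha1*e + gamma1*(abs(e) - sqrt(2/pi))
exp(g(-2) - g(2)) # ~1.56: a negative 2-sd shock has ~56% larger impact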
Robust Standard Errors:
Estimate Std. Error t value Pr(>|t|)
mu 0.005937 0.001958 3.0328 0.002423
omega -0.610497 0.364615 -1.6744 0.094060
alpha1 -0.111069 0.066343 -1.6742 0.094096
beta1 0.902462 0.057336 15.7400 0.000000
Residual analysis of the EGARCH model shows that the model fits the data adequately – the residuals are white noise and show no ARCH effect. The goodness-of-fit test supports the choice of a Gaussian distribution for the error term, although the QQ plot shows that the error distribution has a thicker left tail than the normal distribution. We will fit the t-distribution to check whether it gives a better fit for the behavior of extreme values.
Fitting an ARMA(0,0)-EGARCH(1,1) model with t-distribution
> #Fit ARMA(0,0)-eGARCH(1,1) model with t-distribution
> egarch11.t.spec=ugarchspec(variance.model=list(model = "eGARCH",
  garchOrder=c(1,1)), mean.model=list(armaOrder=c(0,0)),
  distribution.model = "std")
> #estimate model
> egarch11.t.fit=ugarchfit(spec=egarch11.t.spec, data=ret)
> egarch11.t.fit
*---------------------------------*
* GARCH Model Fit *
*---------------------------------*
Conditional Variance Dynamics
-----------------------------------
GARCH Model : eGARCH(1,1)
Mean Model : ARFIMA(0,0,0)
Distribution : std
Optimal Parameters
------------------------------------
Estimate Std. Error t value Pr(>|t|)
mu 0.007093 0.001757 4.0369 0.000054
omega -0.677075 0.246000 -2.7523 0.005917
alpha1 -0.145367 0.045752 -3.1773 0.001487
beta1 0.893975 0.038715 23.0912 0.000000
gamma1 0.202717 0.055934 3.6242 0.000290
shape 7.926664 2.405188 3.2957 0.000982
Using the R output above, the ARMA(0,0)-EGARCH(1,1) model
can be written as follows.
Fitted model:
r_t = 0.007 + a_t, a_t = σ_t e_t
ln(σ_t²) = -0.677 + (-0.145 e_{t-1} + 0.203(|e_{t-1}| - E(|e_{t-1}|))) + 0.894 ln(σ_{t-1}²)
with a t-distribution with 8 degrees of freedom (nearest integer to the shape value).
Note that since e_t has a t-distribution,
E(|e_t|) = 2 √(ν-2) Γ((ν+1)/2) / ((ν-1) Γ(ν/2) √π),
where ν is the degrees of freedom of the t-distribution (denoted by shape in the R output).
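For a given shape value, E(|e_t|) is easy to evaluate in R:
# E|e| for the standardized t-distribution as a function of the shape nu
Eabs.t = function(nu) 2*sqrt(nu-2)*gamma((nu+1)/2)/((nu-1)*gamma(nu/2)*sqrt(pi))
Eabs.t(7.93) # ~0.77 for the estimated shape
sqrt(2/pi) # ~0.80, the Gaussian limit as nu -> infinity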
Note that since gamma1 is significant, the volatility has an asymmetric behavior: a negative shock has a stronger impact on the volatility than a positive shock of the same size.
Robust Standard Errors:
Estimate Std. Error t value Pr(>|t|)
mu 0.007093 0.001888 3.7574 0.000172
omega -0.677075 0.212770 -3.1822 0.001462
alpha1 -0.145367 0.042936 -3.3857 0.000710
beta1 0.893975 0.034112 26.2071 0.000000
gamma1 0.202717 0.045692 4.4366 0.000009
shape 7.926664 2.720128 2.9141 0.003567
Negative Sign Bias 1.2413 0.21510
Positive Sign Bias 0.9747 0.33022
Joint Effect 8.8291 0.03165 **
Adjusted Pearson Goodness-of-Fit Test:
------------------------------------
group statistic p-value(g-1)
1 20 20.66 0.35571
2 30 39.23 0.09736
3 40 54.47 0.05098
4 50 66.51 0.04860
Residual analysis of the EGARCH model shows that the model fits the data adequately – the residuals are white noise and show no ARCH effect. However, the goodness-of-fit test shows that the t-distribution is not a good choice for the error terms (the test p-values are small and reject the null hypothesis of the error term having a t-distribution). The QQ plot shows that the error distribution has thicker tails than the t-distribution. (Further analysis shows that the ged distribution does a better job of representing the distribution of extreme values; see the sketch below.)
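A sketch of that follow-up fit; the object names egarch11.ged.spec and egarch11.ged.fit are ours, and "ged" is one of the distribution.model options built into rugarch:
> #Fit ARMA(0,0)-eGARCH(1,1) model with ged distribution
> egarch11.ged.spec=ugarchspec(variance.model=list(model = "eGARCH",
  garchOrder=c(1,1)), mean.model=list(armaOrder=c(0,0)),
  distribution.model = "ged")
> egarch11.ged.fit=ugarchfit(spec=egarch11.ged.spec, data=ret)
> egarch11.ged.fit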
Fitting an ARMA(0,0)-GJRGARCH(1,1) model, or TGARCH model
The GJRGARCH(1,1) model fitted by the rugarch package has the following expression:
r_t = μ_t + a_t, a_t = σ_t e_t
σ_t² = ω + (α_1 + γ_1 N_{t-1}) a_{t-1}² + β_1 σ_{t-1}²
where μ_t can follow an ARMA process, but most typically it will be constant. N_{t-1} is the indicator variable such that N_{t-1} = 1 when a_{t-1} (the shock at time t-1) is negative, and N_{t-1} = 0 otherwise. Gamma1 (γ1) is the leverage parameter, so if gamma1 in the output is significant, we can conclude that the volatility process has an asymmetric behavior.
> #Fit ARMA(0,0)-TGARCH(1,1) model with t-distribution
> gjrgarch11.t.spec=ugarchspec(variance.model=list(model = "gjrGARCH",
  garchOrder=c(1,1)), mean.model=list(armaOrder=c(0,0)),
  distribution.model = "std")
> #estimate model
> gjrgarch11.t.fit=ugarchfit(spec=gjrgarch11.t.spec, data=ret)
Warning message:
In .makefitmodel(garchmodel = "gjrGARCH", f = .gjrgarchLLH, T = T, :
  NaNs produced
> gjrgarch11.t.fit
*---------------------------------*
* GARCH Model Fit *
*---------------------------------*
Conditional Variance Dynamics
-----------------------------------
GARCH Model : gjrGARCH(1,1)
Mean Model : ARFIMA(0,0,0)
Distribution : std
Optimal Parameters
------------------------------------
Estimate Std. Error t value Pr(>|t|)
mu 0.007202 0.001766 4.077739 0.000045
omega 0.000212 0.000094 2.262826 0.023646
alpha1 0.000001 0.001394 0.000791 0.999369
beta1 0.781111 0.065122 11.994488 0.000000
gamma1 0.215386 0.069611 3.094141 0.001974
shape 7.179979 2.031526 3.534279 0.000409
Using the R output above, the ARMA(0,0)-TGARCH(1,1) model can be written as follows. Note that the arch(1) coefficient is zero, so we remove it from the model.
r_t = 0.0072 + a_t, a_t = σ_t e_t
σ_t² = 0.00021 + 0.215 N_{t-1} a_{t-1}² + 0.781 σ_{t-1}²
with N_{t-1} = 1 if a_{t-1} < 0, and N_{t-1} = 0 otherwise, and e_t t-distributed with 7 degrees of freedom (shape = 7.18).
For a standardized shock with magnitude 2 (i.e. two standard deviations), taking σ_{t-1}² = 0.002 (roughly the unconditional variance), we have
σ_t²(e_{t-1} = -2) / σ_t²(e_{t-1} = +2)
= [0.00021 + (0.000 + 0.215)(4 × 0.002) + 0.781 × 0.002] / [0.00021 + (0.000)(4 × 0.002) + 0.781 × 0.002]
≈ 1.966,
so a negative shock of two standard deviations roughly doubles the next-period conditional variance relative to a positive shock of the same size.
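The same ratio can be checked numerically in R with the fitted coefficients (s2 = 0.002 is the approximate unconditional variance used above):
omega = 0.00021; alpha1 = 0.000; gamma1 = 0.215; beta1 = 0.781; s2 = 0.002
neg = omega + (alpha1 + gamma1)*(4*s2) + beta1*s2 # shock e[t-1] = -2
pos = omega + alpha1*(4*s2) + beta1*s2 # shock e[t-1] = +2
neg/pos # ~1.97, matching the 1.966 above up to rounding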
Elapsed time : 0.458046
> plot(gjrgarch11.t.fit, which="all")
Residual analysis of the TGARCH model shows that the model fits the data adequately – the residuals are white noise and show no ARCH effect. The goodness-of-fit test also shows that the t-distribution is adequate.
Apply information criteria for model selection
> # MODEL COMPARISON
> # compare information criteria
> model.list = list(garch11 = garch11.fit, garch11.t = garch11.t.fit,
+                   egarch11 = egarch11.t.fit,
+                   gjrgarch11 = gjrgarch11.t.fit)
> info.mat = sapply(model.list, infocriteria)
> rownames(info.mat) = rownames(infocriteria(garch11.fit))
> info.mat
garch11 garch11.t egarch11 gjrgarch11
Akaike -3.428558 -3.468892 -3.480644 -3.487042
Bayes -3.393831 -3.425483 -3.428554 -3.434952
Shibata -3.428694 -3.469105 -3.480950 -3.487348
Hannan-Quinn -3.414909 -3.451830 -3.460170 -3.466568
The best model according to the model selection criteria is the GJR-GARCH(1,1) model with t-distribution.
R CODE:
# Analysis of S&P500 index returns, Feb 1970 – Feb 2010
# import libraries for TS analysis
library(fBasics)
library(tseries)
library(rugarch)
# import data in R
myd = read.table('sp500_feb1970_Feb2010.txt', header=T)
# create time series object
rts = ts(myd$return, start = c(1970, 1), frequency=12)
# create a simple numeric object
ret = myd$return
# CREATE TIME PLOT
plot(rts)
# Plots ACF function of vector data
acf(ret)
# Plot ACF of squared data to check for non-linear dependence
acf(ret^2)
# Compute Ljung-Box tests on squared returns to test for non-linear dependence at lags 6 and 12
Box.test(ret^2,lag=6,type='Ljung')
Box.test(ret^2,lag=12,type='Ljung')
# Compute Ljung-Box tests on absolute returns to test for non-linear dependence at lags 6 and 12
Box.test(abs(ret),lag=6,type='Ljung')
Box.test(abs(ret),lag=12,type='Ljung')
# FIT A GARCH(1,1) MODEL WITH GAUSSIAN DISTRIBUTION
# Use ugarchspec() function to specify model
garch11.spec=ugarchspec(variance.model=list(garchOrder=c(1,1)),
  mean.model=list(armaOrder=c(0,0)))
#estimate model
garch11.fit=ugarchfit(spec=garch11.spec, data=ret)
garch11.fit
#persistence = alpha1+beta1
persistence(garch11.fit)
#half-life: ln(0.5)/ln(alpha1+beta1)
halflife(garch11.fit)
#create selection list of plots for garch(1,1) fit
plot(garch11.fit)
#to display all subplots on one page
plot(garch11.fit, which="all")
# FIT ARMA(0,0)-GARCH(1,1) MODEL WITH T-DISTRIBUTION
# specify model using functions in rugarch package