This document contains an analysis of stationarity and unit root tests for the S&P 500 Index (SPIndex) and the Atlanta housing price index (AtlantaHPIndex) time series. Optimal lags were selected using the Bayesian information criterion (BIC). Unit root tests at these lags show that the null hypothesis of non-stationarity cannot be rejected for SPIndex, but can be rejected for AtlantaHPIndex, indicating that the latter is stationary.
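The unit-root logic summarised above can be sketched in plain Python. This is a deliberately stripped-down, no-constant, no-lag Dickey-Fuller statistic on simulated data, not the lag-augmented, BIC-selected test the document actually runs; the series and seed are illustrative only:

```python
import math
import random

def df_tstat(y):
    # No-constant Dickey-Fuller regression: dy_t = (rho - 1) * y_{t-1} + e_t.
    # The t-statistic on the slope is the test statistic; a large negative
    # value is evidence against the unit-root null.
    x = y[:-1]
    dy = [y[t + 1] - y[t] for t in range(len(y) - 1)]
    sxx = sum(v * v for v in x)
    beta = sum(a * b for a, b in zip(x, dy)) / sxx
    resid = [d - beta * v for d, v in zip(dy, x)]
    s2 = sum(e * e for e in resid) / (len(dy) - 1)
    return beta / math.sqrt(s2 / sxx)

random.seed(0)
shocks = [random.gauss(0, 1) for _ in range(500)]
walk, ar1 = [0.0], [0.0]
for u in shocks:
    walk.append(walk[-1] + u)      # random walk: has a unit root
    ar1.append(0.5 * ar1[-1] + u)  # AR(1) with phi = 0.5: stationary
```

The stationary AR(1) series produces a strongly negative statistic, while the random walk's statistic stays near the unit-root region, mirroring the SPIndex vs. AtlantaHPIndex contrast described above.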
1. The document presents a Lean Six Sigma project to reduce liquid particle counts (LPC) in plastic injection molded ramp components produced by ultrasonic washing and drying processes.
2. Baseline data shows the current processes have high variability and are not capable of meeting a new, stricter LPC specification required by customers.
3. The project aims to improve the washing processes for two representative ramp products to achieve a process mean LPC lower than 70% of the new specification by analyzing sources of variation and implementing process improvements.
This document provides standard operating procedures and validation methods for a Brookfield Digital Viscometer. It includes steps for operating the viscometer, installing and qualifying the instrument, and performing calibration checks. The calibration check involves measuring the viscosity of a standard fluid at different speeds and ensuring the readings fall within acceptable error limits of the fluid's known viscosity value.
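The calibration check described above reduces to a simple acceptance rule. The ±1%-of-full-scale tolerance used here is an assumption for illustration (the actual limit depends on the viscometer model and spindle/speed combination), and the function name is hypothetical:

```python
def calibration_ok(readings_cp, standard_cp, full_scale_cp, tol_pct=1.0):
    # Assumed acceptance rule: every reading of the standard fluid must
    # fall within tol_pct percent of the full-scale range of the fluid's
    # known viscosity value (all values in centipoise).
    limit = tol_pct / 100.0 * full_scale_cp
    return all(abs(r - standard_cp) <= limit for r in readings_cp)
```

For a 100 cP standard on a 500 cP range, readings of 98.5 and 100.4 cP would pass a 1% full-scale check, while a reading of 120 cP would fail it.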
This document provides instructions and code snippets for using a CUBLOC microcontroller development board. It includes code to blink LEDs, read input pins, output to pins in binary and decimal, and use loops and conditional statements. Pinout diagrams and schematics are provided showing the microcontroller and board connections. Code examples demonstrate using loops, conditional statements, debugging, and incrementing values to control outputs.
This document summarizes a simulation of the application circuit for the TB62206FG stepping motor driver IC. The simulation models the key features and components of the IC, including the input logic, oscillator, current level setting, mixed decay control, charge pump unit, H-bridge output, and protection unit. It simulates the phase current at a chopping frequency of 100kHz to predict motor current ripple.
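As a rough sanity check on the ripple such a simulation predicts, the back-of-envelope chopper-ripple formula can be coded directly. This is a simplified ideal-inductor model (winding resistance and back-EMF ignored), not the TB62206FG simulation itself, and the supply voltage and inductance below are placeholder values:

```python
def ripple_amps(v_supply, l_henry, f_chop_hz, duty=0.5):
    # During the on-time D/f the phase current ramps at roughly V/L,
    # so peak-to-peak ripple is approximately V * D / (L * f).
    return v_supply * duty / (l_henry * f_chop_hz)
```

At 24 V into a 1 mH winding chopped at 100 kHz with 50% duty, this predicts about 0.12 A of peak-to-peak ripple; raising the chopping frequency shrinks the ripple proportionally.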
Metrology probe tester equipment instrument (Emily Tan)
This document lists various semiconductor manufacturing equipment for sale, including their prices. The most expensive items are an electron beam lithography system for $1,000,000 and an X-ray diffraction system for $78,000. Other listed equipment include wafer probers, thin film measurement systems, surface profilers, and microscopy tools, with prices ranging from $35,000 to under $5,000. The document appears to be from a company called Semistar Corp that sells used semiconductor equipment.
The document provides information about electrical and power control systems, specifically the charging system and power generation voltage variable control system. It includes diagrams of both systems showing component locations and connections. It also describes the components, including that the alternator provides voltage to operate the vehicle and charge the battery while the voltage is controlled by the IC voltage regulator. The power generation voltage variable control system aims to reduce engine load from the alternator and decrease fuel consumption.
Design of Experiments Group Presentation - Spring 2013 (Charles Kemmerer)
This document describes experiments to optimize pipetting parameters for a robotic liquid handler. A screening design of experiments (DOE) was used to evaluate factors affecting pipetting accuracy and precision. Conditioning and aspiration volume had the largest effects on bias and coefficient of variation (%CV). A second DOE with fewer factors confirmed these results and found interactions between aspiration delay, conditioning, and dispense speed. A confirmation run showed settings predicted from the model effects achieved much tighter %CV than original settings or best settings from the DOE runs. While bias could not be improved, the experiments demonstrated pipetting precision can be optimized through experimental design.
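The two response metrics the DOE optimises, bias and %CV, are standard computations on replicate dispense measurements. A minimal sketch (the example volumes are made up, not data from the document):

```python
import statistics

def bias_and_cv(measured_ul, target_ul):
    # bias (%): deviation of the replicate mean from the target volume
    # %CV: sample standard deviation relative to the mean
    mean = statistics.mean(measured_ul)
    bias_pct = 100.0 * (mean - target_ul) / target_ul
    cv_pct = 100.0 * statistics.stdev(measured_ul) / mean
    return bias_pct, cv_pct
```

Five replicates of a 10 µL dispense averaging exactly 10 µL give zero bias, with the %CV capturing the pipetting precision the experiments set out to tighten.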
The document discusses the decline in law school applicants over the past decade: applications fell steadily from the 2010-2011 cycle through 2013. The acceptance rate for law schools rose from 55.6% in 2004 to 76.9% in 2013 as schools attempted to match the reduction in demand with a reduction in supply. Prestigious law schools still have high employment rates, around 95%, concentrated in large law firms, while most other law schools have employment rates of 80% or above.
This document presents an introduction to the multiple linear regression model. It explains that this model uses more than one explanatory variable to predict the values of a dependent variable more precisely than simple linear regression. It also describes how the model's parameters are estimated by the method of least squares and how the model's goodness of fit is evaluated. Finally, it introduces an example to illustrate the application of multiple linear regression.
The document discusses unrestricted vector autoregression (VAR) models. It analyzes a VAR model using quarterly data on H6 money aggregate DDA, personal income, and 10-year Treasury rates from the early 1960s to 2015. The model includes endogenous and exogenous variables. The main benefits of VAR discussed are that it allows measuring the impact of shocks to endogenous variables on other variables using impulse response functions and forecast error variance decompositions. However, the document notes some limitations of VAR models and questions whether some results like impulse responses truly represent economic relationships.
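The impulse-response idea mentioned above is mechanical for a VAR(1): a unit shock propagates through powers of the coefficient matrix. A minimal bivariate sketch (the coefficient matrix below is arbitrary, not estimated from the document's data):

```python
def irf(A, horizon):
    # Responses of a bivariate VAR(1) y_t = A y_{t-1} + e_t to a unit
    # shock in the first variable at time 0: the shock vector is simply
    # premultiplied by A at each successive horizon.
    responses, state = [], [1.0, 0.0]
    for _ in range(horizon + 1):
        responses.append(list(state))
        state = [A[0][0] * state[0] + A[0][1] * state[1],
                 A[1][0] * state[0] + A[1][1] * state[1]]
    return responses
```

Whether the resulting paths represent genuine economic relationships, rather than artefacts of the identification scheme, is exactly the caveat the document raises.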
The document discusses deep learning and deep neural networks. Some key points:
1) A deep neural network (DNN) has at least two hidden layers, whereas a regular neural network only has one hidden layer. DNNs can be thought of as a series of logit regressions with intermediate factors representing hidden layers.
2) Important parameters for DNNs include the number of hidden layers, number of nodes per layer, activation functions, number of iterations, and output function. Tuning these parameters is important.
3) The author tested various DNN structures on a dataset to predict stock market returns, comparing performance to a regression model. DNN models with one hidden layer of 5-7 nodes performed better than the regression model.
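The "series of logit regressions" view in point 1 is literal: each hidden unit is a logistic regression on the inputs, and the output is a logistic regression on the hidden units. A minimal forward pass with arbitrary placeholder weights (stacking further hidden layers just repeats the middle step):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, b1, w2, b2):
    # One hidden layer of logistic units feeding a logistic output:
    # a logit regression applied to the outputs of logit regressions.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    return sigmoid(sum(w * h for w, h in zip(w2, hidden)) + b2)
```

With all weights and biases at zero, every unit outputs 0.5; training is what moves these parameters, and the layer/node counts listed in point 2 are the tuning knobs.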
Quantitative method intro: variable_levels_measurement (Keiko Ono)
This document discusses variables, levels of measurement, and key terms in quantitative methods. It defines a variable as a property of an observation that can take on two or more values. There are three levels of measurement for variables: nominal, ordinal, and interval. Nominal variables categorize without order, ordinal can be ordered but differences are not exact, and interval variables have exact differences represented by each value. Appropriate summary statistics depend on the level of measurement, with nominal only allowing frequency and mode, ordinal adding median and range, and interval permitting all including mean, variance, minimum, and maximum.
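The mapping from measurement level to permitted summary statistics nests neatly, which a small lookup makes explicit (a sketch of the rule as stated above, not code from the document):

```python
ALLOWED_STATS = {
    "nominal": {"frequency", "mode"},
    "ordinal": {"frequency", "mode", "median", "range"},
    "interval": {"frequency", "mode", "median", "range",
                 "mean", "variance", "minimum", "maximum"},
}

def allowed_stats(level):
    # Each higher level of measurement permits every statistic
    # allowed at the levels below it, plus its own additions.
    return ALLOWED_STATS[level]
```

The strict-subset chain nominal ⊂ ordinal ⊂ interval is the whole point: choosing a statistic your variable's level does not support (say, a mean of ordinal ranks) is the error the document warns against.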
Lecture notes on Johansen cointegration (Moses Sichei)
This document discusses the Johansen cointegration procedure and error correction models. It provides an example where there are 3 variables (short-term interest rate, 3-year interest rate, and 10-year interest rate) that are cointegrated with 2 cointegrating relationships. The error correction form of the vector autoregression is shown, with the 2 cointegrating vectors entering each equation. Restrictions can be tested on the coefficients of the cointegrating vectors (beta) using likelihood ratio tests. This allows testing of economic theory restrictions on the long-run relationships between the variables.
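The error-correction mechanism in those equations has a simple one-equation skeleton: the change in a variable responds to last period's deviation from the long-run relation. A single-cointegrating-vector sketch with made-up coefficients (the document's example has three rates and two cointegrating vectors, so each equation there carries two such terms):

```python
def ecm_step(y_prev, x_prev, dx, alpha=-0.5, beta=1.0, gamma=0.3):
    # ect: last period's deviation from the long-run relation y = beta * x.
    # alpha < 0 pulls y back toward equilibrium; gamma passes through
    # short-run movements in x.
    ect = y_prev - beta * x_prev
    dy = alpha * ect + gamma * dx
    return y_prev + dy
```

If y sits above its long-run value (ect > 0), the negative adjustment coefficient drags it partway back each period; at equilibrium (ect = 0, dx = 0) it stays put.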
These days, much of the data being generated takes the form of time series: climate records, users' posts on social media, stock prices, neurological data, and more. Discovering the temporal dependence between different time series is an important task in time series analysis, with applications in fields ranging from social media advertising, influencer detection, and marketing to stock markets, psychology, and climate science. Identifying these networks of dependencies is the subject studied in this report.
The report reviews how this problem has been treated in the field of econometrics, examines three different approaches for building causal networks between time series, and then shows how this knowledge has been used in three quite different fields. Finally, some important open issues are presented, along with areas in which the work can be extended for further research.
This document provides an outline and overview of tutorials for using the STATA data analysis software. It describes STATA's capabilities for data management tasks like sorting, keeping/dropping variables and observations, merging datasets, and working with dates. It lists example datasets from an econometrics textbook that are used in the tutorials. The website www.STATA.org.uk hosts step-by-step screenshot guides for various STATA functions covering data management, statistical analysis, importing data, and more.
Issues associated with unit roots, multicollinearity, and autocorrelation. These issues are not as black-and-white as people think they are; they are complex and at times even inconclusive. Read why.
This document outlines the generalised method of moments (GMM) estimation technique. It begins with the basic principles of GMM, including that it uses theoretical relations that parameters should satisfy to choose parameter estimates. It then discusses estimating GMM, hypothesis testing with GMM, and extensions such as using GMM with dynamic stochastic general equilibrium (DSGE) models. The document provides details on how population moments relate to sample moments, and how method of moments estimation and instrumental variables estimation can both be viewed as special cases of GMM. It concludes by explaining how the generalized method of moments estimator works by minimizing a weighted distance between sample and population moments.
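The "weighted distance between sample and population moments" that GMM minimises is easy to write down for a toy case. Here the parameters are a mean and variance, the two moment conditions are E[x − µ] = 0 and E[(x − µ)² − σ²] = 0, and the weighting matrix is the identity (a sketch of the principle, not the document's estimator):

```python
def gmm_objective(theta, data):
    # Sample analogues of the two moment conditions; with an identity
    # weighting matrix the objective is just the sum of their squares.
    mu, sigma2 = theta
    n = len(data)
    g1 = sum(x - mu for x in data) / n
    g2 = sum((x - mu) ** 2 - sigma2 for x in data) / n
    return g1 * g1 + g2 * g2
```

At the sample mean and (population-style) sample variance the objective is zero, which is the exactly-identified case; with more moments than parameters the minimum is generally positive and the choice of weighting matrix starts to matter.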
The document discusses various econometric modeling techniques including regression equations, cointegration, error correction models, vector autoregressive (VAR) modeling, and vector error correction models (VECM). It explains that regression equations can produce spurious results if the data is non-stationary, and that cointegration exists if the residuals from a regression equation are stationary. Error correction models specify the short-run relationship that maintains the long-run equilibrium between cointegrated variables. VAR models express current values of variables as functions of past values, while VECMs are VARs in first differences that incorporate the long-run cointegrating relationships between variables.
This document is an introduction to statistical machine learning presented by Christfried Webers from NICTA and The Australian National University. It discusses linear basis function models and how to perform maximum likelihood and least squares estimation. Specifically, it shows that maximizing the likelihood is equivalent to minimizing the sum-of-squares error, and that the maximum likelihood solution is given by the pseudo-inverse of the design matrix. It also examines the geometry of least squares and the bias-variance decomposition.
This document discusses Granger causality and how to test for it. It provides the following key points:
1) Granger causality measures whether variable A occurs before variable B and helps predict B, but does not guarantee true causality. If A does not Granger cause B, one can be more confident A does not cause B.
2) To test for Granger causality, autoregressive models are developed with and without the variable being tested, and an F-test or t-test is used to see if adding the variable significantly lowers the residuals.
3) The document applies this to test if changes in loans Granger cause changes in deposits using quarterly U.S. financial data.
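The F-test in point 2 compares the restricted model (without lags of the candidate cause) to the unrestricted one. Given the two residual sums of squares, the statistic is a one-liner; the numbers in the usage note are illustrative, not the document's loans/deposits results:

```python
def granger_f(ssr_restricted, ssr_unrestricted, n_obs, n_lags, k_params):
    # F = [(SSR_r - SSR_u) / q] / [SSR_u / (n - k)], where q is the number
    # of lags of the candidate cause excluded under the null and k is the
    # parameter count of the unrestricted model. A large F means adding
    # the candidate's lags significantly lowered the residuals.
    num = (ssr_restricted - ssr_unrestricted) / n_lags
    den = ssr_unrestricted / (n_obs - k_params)
    return num / den
```

For example, if excluding 4 lags raises the SSR from 100 to 120 with 100 observations and 9 unrestricted parameters, F = 4.55, compared against the F(4, 91) critical value.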
This document discusses multiple linear regression analysis. It begins by defining a multiple regression equation that describes the relationship between a response variable and two or more explanatory variables. It notes that multiple regression allows prediction of a response using more than one predictor variable. The document outlines key elements of multiple regression including visualization of relationships, statistical significance testing, and evaluating model fit. It provides examples of interpreting multiple regression output and using the technique to predict outcomes.
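Fitting the multiple regression equation described above comes down to solving the normal equations (X'X)b = X'y. A self-contained stdlib sketch using Gauss-Jordan elimination, purely for illustration (real work would use a library solver with pivoting):

```python
def ols(X, y):
    # X: rows of predictors, each starting with a 1 for the intercept.
    # Builds X'X and X'y, then solves (X'X) b = X'y by Gauss-Jordan.
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):
        piv = XtX[i][i]
        XtX[i] = [v / piv for v in XtX[i]]
        Xty[i] /= piv
        for m in range(k):
            if m != i:
                f = XtX[m][i]
                XtX[m] = [a - f * b for a, b in zip(XtX[m], XtX[i])]
                Xty[m] -= f * Xty[i]
    return Xty  # [intercept, slope_1, slope_2, ...]
```

On data generated exactly by y = 1 + 2·x1 + 3·x2, the solver recovers those coefficients, which is the "prediction using more than one predictor variable" the document describes.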
This document provides an outline for tutorials on using STATA software for statistical analysis, focusing on linear regressions. It describes datasets used in examples from an econometrics textbook. The outline lists topics covered in the tutorials, including regressions, F-statistics, fitted values, residuals, and challenges. It directs readers to a website for downloading tutorial examples and guides to using STATA for additional statistical techniques.
This is a fairly broad exploration of, and tutorial on, basic econometric modeling techniques. It introduces quite a few multiple regression methods, and it includes extensive coverage of model testing to ensure that your model is quantitatively sound and statistically robust, following established peer-review practice.
This document summarizes nonparametric hypothesis testing methods that can be used when sample sizes are small and variables are not normally distributed. It describes the sign test, which tests for differences in paired data by counting how many pairs differ, and the Mann-Whitney U test for unpaired data, which ranks all values together and tests for differences in average ranks between samples. An example compares the income of applicants for fixed-rate versus variable-rate mortgages using the Mann-Whitney U test, finding the probability of difference being due to chance is 18.3%, suggesting variable rate applicants have higher income.
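The sign test described above has an exact p-value: under the null each non-tied pair is equally likely to differ in either direction, so the smaller count follows a Binomial(n, 1/2). A small sketch of that computation (the counts in the test are illustrative, not the document's mortgage data):

```python
from math import comb

def sign_test_p(n_pos, n_neg):
    # Exact two-sided sign test: sum the binomial lower tail up to the
    # smaller count and double it, capping at 1.
    n = n_pos + n_neg
    k = min(n_pos, n_neg)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)
```

With 8 pairs differing one way and 2 the other, the two-sided p-value is about 0.109, so even a lopsided-looking split can fail to reach significance at small sample sizes, which is exactly the regime these nonparametric tests are designed for.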
This document provides an overview of qualitative data analysis (QDA) methods. It discusses the origins and current practices of QDA, with a focus on grounded theory. It describes different types of qualitative data that can be analyzed, including interviews, focus groups, observations, documents, and multimedia. The document outlines the QDA process, including data collection and analysis, coding, memo writing, and developing theories. It also discusses several QDA software programs and the steps involved in using AtlasTi for qualitative analysis.
The document analyzes the relationships between inflation, unemployment, and interest rates using a vector autoregression (VAR) model. It finds that:
1) All three variables - inflation, unemployment, and interest rates - are stationary based on tests of their autocorrelation functions.
2) A VAR with a lag length of 2 is optimal based on information criteria.
3) In the estimated VAR models, lags of inflation and unemployment are significant predictors of current inflation, while only lags of unemployment are significant for current unemployment.
4) Diagnostic tests of residuals show white noise, validating the fitted VAR models. Forecasts for 2015-2016 are also generated from the models.
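The "optimal lag based on information criteria" step in point 2 trades fit against parameter count. A Gaussian-likelihood BIC, up to an additive constant, can be sketched as follows (a generic formula, not the document's software output):

```python
import math

def gaussian_bic(ssr, n_obs, n_params):
    # BIC up to an additive constant: n * log(SSR / n) + k * log(n).
    # Lower is better; the k * log(n) term penalises extra lags, so a
    # longer lag must reduce the SSR enough to pay for its parameters.
    return n_obs * math.log(ssr / n_obs) + n_params * math.log(n_obs)
```

For instance, cutting the SSR from 100 to 95 over 100 observations does not justify going from 2 to 6 parameters: the penalised score gets worse, so the shorter lag wins, just as the lag-2 VAR wins above.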
Time series analysis on The daily closing price of bitcoin from the 27th of A... (ShuaiGao3)
The data analysed in this report is the daily closing price of Bitcoin from the 27th of April 2013 to the 3rd of March 2018. The objective of this report is to analyse the Bitcoin closing price using time series analysis methods, choose the best model among a set of candidate models for this dataset, and give forecasts of Bitcoin for the next 10 days. The rest of this report is organised as follows. Section 2 gives an overview of our methodology. Section 3 covers data preprocessing for further analysis. Section 4 presents a descriptive analysis. Section 5 focuses on fitting a quadratic time trend model. Section 6 fits a best ARIMA model and discusses GARCH models on the transformed series. Section 7 explores ARMA+GARCH models. In Section 8 we make our final selection of a best fitting model. Section 9 includes a mean absolute scaled error (MASE) for each of the model fits and forecasts. The last section concludes with a summary.
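The MASE used to compare those model fits scales the forecast's mean absolute error by the in-sample error of the one-step naive (random-walk) forecast, so values below 1 beat "tomorrow equals today". A minimal sketch with made-up numbers:

```python
def mase(actual, forecast, train):
    # MAE of the out-of-sample forecast divided by the in-sample MAE of
    # the one-step naive forecast on the training series.
    mae = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
    naive = sum(abs(train[t] - train[t - 1])
                for t in range(1, len(train))) / (len(train) - 1)
    return mae / naive
```

A training series that moves 1 unit per step gives a naive MAE of 1, so a forecast off by 0.5 on average scores a MASE of 0.5, i.e. twice as accurate as the naive benchmark.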
www.envimart.vn - ĐT: 028 77727979 - sales@envimart.vn - Nền tảng cung cấp thiết bị, vật tư ngành nước và môi trường. Chuyên cung cấp vật tư cho dự án xử lý nước sạch, nước thải và môi trường. Envimart luôn đồng hành, tin cậy với đối tác nhà thầu, nhà tích hợp và người sử dụng.
This document provides documentation and instructions for using an Arduino library for the ADS1256 24-bit analog-to-digital converter. It describes the converter's features like 8 input channels that can be used in single-ended or differential mode, and programmable gain and sampling rates. It also details the converter's 11 registers for configuration, and provides code examples for reading and writing register values to control the converter.
Cd00004444 understanding-and-minimising-adc-conversion-errors-stmicroelectronicsvu CAO
The document discusses understanding and minimizing errors in analog to digital conversion. It explains that the ADC converts analog signals to digital values but is subject to various errors from factors like noise, voltage sources, and PCB layout. It provides techniques to minimize errors, such as reducing noise, improving voltage regulation, and carefully designing PCB layout and analog signal paths.
This document is a textbook on embedded systems programming with the Microchip PIC16F877 microcontroller. It introduces embedded systems and their prevalence. It discusses four levels of embedded systems in terms of size, options, and complexity - from high level systems like air traffic control to low level systems like appliances. The book will cover fundamental and advanced embedded programming techniques that can apply to any microcontroller, as well as an introduction to digital signal processing using the PIC16F877.
This document describes a thesis submitted by Vishal K Gawade and Aayush Garg for their Bachelor's degree. The thesis focuses on modeling and simulation of induction motors and wind turbines. It provides background on vector control of induction motors and describes the mathematical modeling of induction motors. It also covers topics related to wind turbine design such as blade element momentum theory and pitch control. The document includes MATLAB code examples and Simulink models developed as part of the thesis work.
This document provides an introduction to programmable logic controllers (PLCs). It covers PLC hardware inputs and outputs, the IEC 61131-3 programming languages, and ladder diagram programming. The full tutorial explains types and sizes of PLCs, discrete and analog I/O, and network I/O. It also discusses the IEC 61131-3 languages, and provides details on ladder diagram programming including contacts, coils, counters, timers, comparison instructions, arithmetic instructions, and sequencer instructions. The document concludes with sections on human-machine interfaces, derivations, technical references, questions, projects and experiments.
Schneider light control pricelist wef 1st nov 2018Maxpromotion
OM SWITCHGEAR AND ELECTRICALS is a prominent manufacturer and trader of Electrical Control Panel, Electrical Switchgear, Pump, Motor, and Cables. Our products are well appreciated all over India for their quality. We are supported by a transport and packaging facility which helps us in keeping our electrical products safely while ensuring uninterrupted dispatch.
This document provides an operating manual for the KS50-1 and KS52-1 industrial controllers. It describes how to mount and make electrical connections to the controllers, and explains their operation including configuration, parameter settings, calibration, programming, and special functions. Mounting involves fastening the controller housing and setting safety switches for the inputs and outputs. Electrical connections cover power supply, sensor inputs, relay and universal outputs, and optional digital and communication interfaces. Operation involves levels for monitoring, configuration, parameter settings, and calibration. Self-tuning, manual tuning, and PID parameter sets are explained.
www.envimart.vn - ĐT: 028 77727979 - sales@envimart.vn - Nền tảng cung cấp thiết bị, vật tư ngành nước và môi trường. Chuyên cung cấp vật tư cho dự án xử lý nước sạch, nước thải và môi trường. Envimart luôn đồng hành, tin cậy với đối tác nhà thầu, nhà tích hợp và người sử dụng.
The document summarizes the analysis of a large airline dataset to identify the top 4 airports by traffic volume and top 5 airlines operating on routes to and from those airports. It outlines the objectives, data preparation process taking a 1% sample from each of the 303 files and merging them, and provides the results of the analysis identifying ATL, ORD, LAX, and DFW as the top 4 airports and seasonal analysis of flight delays between the major carriers operating on those routes.
This document provides technical documentation for the Twin Line Controller 51x (TLC51x) positioning controller for stepper motors. It includes details on the product, safety guidelines, technical specifications, installation, commissioning, operating modes, functions, diagnosis and service. The TLC51x is a modular positioning controller that can be configured for different applications and interfaces. It provides functions such as manual movement, speed control, point-to-point positioning and electronic gearing.
Brl arpa doploc satellite detection complexClifford Stone
This document provides a technical summary of a research and development program from June 1958 to June 1961 that evaluated the feasibility of using Doppler shift measurements to detect and track non-radiating satellites. An interim research facility was established with an illuminating transmitter and two receiving stations to collect real data and develop computational methods for determining orbital parameters from single-station passes. To address the expected increasing number of satellites, a full-scale system was proposed using a scanned fan beam with a transmitter and receiver separated by 1,000 miles that would synchronously scan a half-cylinder volume out to a radius of 3,000 miles. However, further development of the full-scale system was prevented by an early project cancellation order.
This thesis proposes a technique called live range splitting to reduce spill code generated during register allocation. It identifies low register pressure regions within live ranges where spill code is not needed. The algorithm computes these regions using Reif and Tarjan's algorithm for symbolic covers. It then finds load and store candidates within the low pressure regions to limit spill code insertion. Experimental results show the approach reduces static spills by up to 25% and dynamic loads by up to 20%, with an average execution time decrease of 2.3% and small increase in register allocation time.
A High Speed Successive Approximation Pipelined ADCPushpak Dagade
This document describes a thesis submitted by Pushpak Dagade for the degree of Master of Technology in Integrated Electronics & Circuits. The thesis proposes a new successive approximation pipelined (SAP) ADC architecture to overcome speed limitations of traditional SAR ADCs. It presents the design of a 8-bit SAP ADC including components like a D flip-flop, comparator, and DAC. Simulation results demonstrating the SAP ADC's operation are also included. The thesis concludes with proposals for further work on the schematic, layout, and post-fabrication testing.
A High Speed Successive Approximation Pipelined ADC.pdfKathryn Patel
This document is a thesis submitted by Pushpak Dagade for the degree of Master of Technology in Integrated Electronics & Circuits at the Indian Institute of Technology, Delhi, under the guidance of Prof. G. S. Visweswaran. The thesis presents the design of a high-speed successive approximation pipelined (SAP) analog-to-digital converter (ADC). Chapter 1 introduces successive approximation algorithms and different types of successive approximation ADCs. The aim of the project is to design an 8-bit SAP ADC and demonstrate its potential for high-speed conversion applications.
This document provides information on the STM8S003F3 and STM8S003K3 microcontrollers. Key features include an 8-bit STM8 core running at up to 16 MHz, 8 KB of flash memory, 1 KB of RAM, 128 bytes of EEPROM, a 10-bit ADC, timers, and communication interfaces like UART, SPI, and I2C. The document describes the microcontroller's architecture, peripherals, memory map, electrical characteristics, and development tools.
1 Stationarity and Unit Root Test
1.1 Correlogram of SPIndex
Figure 1: Correlogram of S&P 500 Index
From the correlogram of SPIndex we can see that the ACF starts high and falls
only slowly over time; it is still quite high even after 20 lags, which
indicates a nonstationary data set.
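To see why this ACF pattern diagnoses non-stationarity, here is a minimal pure-Python sketch (illustrative only; the report's own numbers come from gretl): the sample ACF of a simulated random walk starts near 1 and decays very slowly, while the ACF of white noise is near zero at every lag.

```python
import random

def acf(x, max_lag):
    """Sample autocorrelations r_1..r_max_lag of a series x."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    return [sum((x[t] - mean) * (x[t - k] - mean) for t in range(k, n)) / var
            for k in range(1, max_lag + 1)]

random.seed(0)
eps = [random.gauss(0, 1) for _ in range(500)]   # white noise (stationary)
walk, s = [], 0.0
for e in eps:                                     # random walk (unit root)
    s += e
    walk.append(s)

acf_walk = acf(walk, 20)  # starts near 1 and stays high: nonstationary pattern
acf_eps = acf(eps, 20)    # near zero at every lag: stationary white noise
```

The "start high and stay high" shape of `acf_walk` is exactly what the SPIndex and AtlantaHPIndex correlograms display.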
1.2 LB test for SPIndex
LAG ACF PACF Q-STAT. [p-value]
1 0.9748 *** 0.0101629 *** -1.8636 0.0641
2 0.452030 *** 0.0702484 6.4347 0.0000
3 0.9241 *** -0.0130 481.6398 0.0000
4 0.8990 *** -0.0095 627.2279 0.0000
5 0.8746 *** 0.0014 465.8330 0.0000
6 0.8508 *** -0.0010 897.7754 0.0000
7 0.8275 *** -0.0030 1023.3324 0.0000
8 0.8044 *** -0.0082 1142.6946 0.0000
9 0.7814 *** -0.0101 1256.0196 0.0000
10 0.4585 *** -0.0112 1363.4503 0.0000
11 0.7357 *** -0.0103 1465.1438 0.0000
12 0.7131 *** -0.0035 1652.0594 0.0000
13 0.6909 *** -0.0035 1737.8513 0.0000
14 0.6695 *** 0.0026 1737.8513 0.0000
15 0.6488 *** 0.0009 1818.9225 0.0000
16 0.6287 *** 0.0006 1895.5384 0.0000
17 0.6094 *** 0.0022 1967.9673 0.0000
18 0.5904 *** -0.0033 2036.3975 0.0000
19 0.5904 *** -0.0033 2101.0093 0.0000
20 0.5538 *** -0.0013 2261.9961 0.0000
21 0.5362 *** -0.0018 2219.5375 0.0000
22 0.5189 *** -0.0032 2273.7922 0.0000
The LB test points the same way: essentially all the p-values are near 0,
which indicates the series is not white noise and there are still dynamics
left to capture. We therefore need a proper unit root test to determine
whether the data are stationary.
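The Q-statistic reported in the table above is the Ljung-Box statistic. A small pure-Python sketch of how it is built from the sample autocorrelations (illustrative simulated data, not the report's series):

```python
import random

def ljung_box_q(x, max_lag):
    """Ljung-Box statistic Q = T(T+2) * sum_{k=1..m} r_k^2 / (T-k).
    Under the white-noise null, Q is approximately chi-square with m df."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    q = 0.0
    for k in range(1, max_lag + 1):
        r = sum((x[t] - mean) * (x[t - k] - mean) for t in range(k, n)) / var
        q += r * r / (n - k)
    return n * (n + 2) * q

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(300)]
persistent, v = [], 0.0
for _ in range(300):                  # highly persistent AR(1), phi = 0.98
    v = 0.98 * v + random.gauss(0, 1)
    persistent.append(v)

q_noise = ljung_box_q(noise, 22)            # near its chi-square(22) mean of 22
q_persistent = ljung_box_q(persistent, 22)  # huge, so the p-value is near 0
```

A huge Q (tiny p-value) at every lag, as in the tables above, is what rejecting white noise looks like.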
1.3 Correlogram for AtlantaHPIndex
Figure 2: Correlogram of Atlanta Housing Price Index
From the correlogram of AtlantaHPIndex we can see the ACF shows the classic
“start high and stay high” pattern, which indicates a nonstationary data set.
1.4 Autocorrelation function for AtlantaHPIndex
LAG ACF PACF Q-STAT. [p-value]
1 0.9842 *** 0.9842 *** 171.4817 0.0000
2 0.9687 *** -0.0022 338.5430 0.0000
3 0.9553 *** -0.0005 501.2976 0.0000
4 0.9385 *** 0.0096 659.9632 0.0000
5 0.9238 *** -0.0052 814.5936 0.0000
6 0.9091 *** -0.0044 965.2503 0.0000
7 0.8943 *** -0.0115 111.9217 0.0000
8 0.8795 *** -0.0079 1254.6383 0.0000
9 0.8644 *** -0.0204 1393.3069 0.0000
10 0.8487 *** -0.0229 1527.8208 0.0000
11 0.8329 *** -0.0147 1658.1635 0.0000
12 0.8170 *** -0.0140 1784.3336 0.0000
13 0.8010 *** -0.0083 1906.3875 0.0000
14 0.7850 *** -0.0141 2024.3255 0.0000
15 0.7690 *** -0.0058 2138.2272 0.0000
16 0.7533 *** -0.0001 2248.2213 0.0000
17 0.7377 *** -0.0053 2354.3853 0.0000
18 0.7220 *** -0.0135 2456.7193 0.0000
19 0.7065 *** -0.0015 2555.3333 0.0000
20 0.6907 *** -0.0179 2650.1943 0.0000
21 0.6746 *** -0.0170 2741.2887 0.0000
22 0.6580 *** -0.0278 2828.5197 0.0000
Again the LB test is consistent with non-stationarity: all the p-values are
near 0, indicating the series is not white noise and there are still dynamics
left to capture, so a proper unit root test is needed to determine whether
the data are stationary.
1.5 Lag selection
Table 1: BIC results for different lag of SPIndex & AtlantaHPIndex
Lag S&P Atlanta
1 -1725.376* -1539.490
2 -1718.734 -1527.382
3 -1719.351 -1547.213*
4 -1714.473 -1540.906
5 -1700.517 -1526.633
6 -1685.497 -1524.254
7 -1669.186 -1512.989
8 -1656.754 -1502.498
9 -1644.221 -1489.383
10 -1631.732 -1490.806
For the unit root test we choose the lag with the lowest BIC: lag 1 for
SPIndex and lag 3 for AtlantaHPIndex.
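The selection rule can be sketched as follows. This is a pure-Python illustration with made-up SSR values (not the report's data), using one common OLS form of the BIC; gretl's reported Schwarz criterion uses a log-likelihood normalization, but the ranking logic is the same.

```python
import math

def bic(ssr, T, k):
    """One common OLS form of the Schwarz/Bayesian criterion:
    BIC = T*ln(SSR/T) + k*ln(T); lower is better."""
    return T * math.log(ssr / T) + k * math.log(T)

# Hypothetical SSRs for ADF regressions with p = 1..4 lagged differences
# (illustrative numbers only -- not taken from the report's output).
# In practice the estimation sample should be held fixed across lag orders.
T = 172
ssr = {1: 0.000393, 2: 0.000391, 3: 0.000390, 4: 0.000389}
scores = {p: bic(ssr[p], T, 3 + p) for p in ssr}  # const + trend + level + p diffs
best = min(scores, key=scores.get)  # extra lags barely cut SSR, so p = 1 wins
```

Each additional lag must reduce SSR enough to beat the ln(T) penalty; here it does not, so the smallest model is selected.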
1.6 Unit Root Test for SPIndex
Table 2: Model 24: OLS, using observations 1991:03–2005:06 (T = 172)
Dependent variable: d l SPIndex
Coefficient Std. Error t-ratio p-value
const 0.000626609 0.00392999 0.1594 0.8735
time 1.53504e-005 7.1412e-006 2.5675 0.0111
l SPIndex 1 -0.000249928 0.000952091 -0.2625 0.7933
d l SPIndex 1 0.820463 0.044563 18.4195 0.0000
Mean dependent var 0.005683 S.D. dependent var 0.005705
Sum squared resid 0.000393 S.E. of regression 0.001530
R² 0.929369    Adjusted R² 0.928108
F(3,168) 736.8531    P-value(F) 2.07e-96
Log-likelihood 872.9830    Akaike criterion -1737.966
Schwarz criterion -1725.376    H-Q -1732.858
ρ̂ 0.162052    Durbin’s h 2.618554
In order to perform a proper unit root test we select the lag with the lowest
SIC value as our optimal lag, after generating the candidate models in gretl
(Appendix 9.1.1). From that output we choose lag one as the optimal lag for
the unit root test. The results of the unit root test are shown above. The
absolute value of the t-ratio on the lagged log of SPIndex is less than the
critical value of 3.9, so we cannot reject the unit root null: SPIndex is
non-stationary.
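The logic of the test can be sketched in pure Python. This simplified version omits the time trend and augmentation lags used in the gretl models above, so it only illustrates why a small |t| on the lagged level fails to reject the unit root while a large negative t rejects it.

```python
import math
import random

def df_t_ratio(y):
    """t-ratio on the lagged level in the simplest Dickey-Fuller regression
    dy_t = b0 + b1*y_{t-1} + u_t (no trend or augmentation lags -- a sketch,
    not the full ADF specification used in the report)."""
    x = y[:-1]                                   # lagged level
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    n = len(dy)
    mx, my = sum(x) / n, sum(dy) / n
    sxx = sum((v - mx) ** 2 for v in x)
    b1 = sum((a - mx) * (b - my) for a, b in zip(x, dy)) / sxx
    b0 = my - b1 * mx
    ssr = sum((b - b0 - b1 * a) ** 2 for a, b in zip(x, dy))
    return b1 / math.sqrt(ssr / (n - 2) / sxx)

random.seed(2)
walk, level = [], 0.0    # unit-root series
ar, v = [], 0.0          # stationary AR(1), phi = 0.5
for _ in range(400):
    level += random.gauss(0, 1)
    walk.append(level)
    v = 0.5 * v + random.gauss(0, 1)
    ar.append(v)

t_walk = df_t_ratio(walk)  # small |t|: cannot reject the unit-root null
t_ar = df_t_ratio(ar)      # large negative t: reject, series is stationary
```

Note the t-ratio is compared against Dickey-Fuller critical values, not the usual Student-t ones.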
1.8 Unit Root Test for AtlantaHPIndex
Model 1: OLS, using observations 1991:05–2005:06 (T = 170)
Dependent variable: d l AtlantaHPIndex
Coefficient Std. Error t-ratio p-value
const 0.102789 0.0356068 2.8868 0.0044
l AtlantaHPIndex 1 -0.0242917 0.00853775 -2.8452 0.0050
time 0.000102817 3.35030e-005 3.0689 0.0025
d l AtlantaHPIndex 1 0.446385 0.0692713 6.4440 0.0000
d l AtlantaHPIndex 2 0.327009 0.0741171 4.4121 0.0000
d l AtlantaHPIndex 3 -0.395986 0.0682175 -5.8048 0.0000
Mean dependent var 0.003625 S.D. dependent var 0.003044
Sum squared resid 0.000908 S.E. of regression 0.002353
R² 0.419870    Adjusted R² 0.402183
F(5,164) 23.73902    P-value(F) 6.48e-18
Log-likelihood 790.6590    Akaike criterion -1569.318
Schwarz criterion -1550.503    H-Q -1561.683
ρ̂ 0.088891    Durbin’s h 2.700071
We follow the same procedure for AtlantaHPIndex. After generating the
candidate models in gretl (Appendix 9.1.2), from that output we choose lag
three as the optimal lag for the unit root test. The results are shown above.
The absolute value of the t-ratio on the lagged log of AtlantaHPIndex is less
than the critical value of 3.9, which indicates non-stationarity for
AtlantaHPIndex.
1.9 Hypothesis test
H0: ρ = 1 → unit root; non-stationary
H1: ρ < 1 → no unit root; stationary
∆l_AtlantaHPIndex_t = β0 + β1·l_AtlantaHPIndex_{t−1} + β2·t
+ γ1·∆l_AtlantaHPIndex_{t−1} + γ2·∆l_AtlantaHPIndex_{t−2}
+ γ3·∆l_AtlantaHPIndex_{t−3} + u_t,  with β1 = ρ − 1
1.10 Time Series Plot AtlantaHPIndex & SPIndex
Figure 3: Time series Plot AtlantaHPIndex & SPIndex
From the time series plots of SPIndex and the Atlanta housing price index we
can see some correlation between the two: both indices trend upward. However,
SPIndex appears to follow a more quadratic trend, while the Atlanta index is
more linear by comparison.
2 Cointegrating Regression
2.1 Scatterplot
Figure 4: scatterplot AtlantaHPIndex & SPIndex (with least squares fit)
The scatterplot shows that the Atlanta index moves with SPIndex. The
least-squares line follows the general trend of the data and has a positive
slope: as SPIndex increases, the Atlanta index tends to increase as well.
2.2 Estimation of the Cointegration Residual
Lag BIC
1 -1548.349
2 -1537.330
3 -1553.980*
4 -1548.994
5 -1534.946
6 -1531.406
7 -1520.662
8 -1509.729
9 -1496.274
10 -1498.845
In order to perform a proper cointegrating regression we select the lag with
the lowest SIC value as our optimal lag. After generating the candidate
models in gretl (Appendix 9.2.1), from that output we choose lag three as the
optimal lag for the Engle-Granger test.
2.3 Correlogram of the Cointegration Residual
Figure 5: Correlogram of the cointegration residuals
The ACF decays over time and falls inside the confidence interval, and the
PACF drops to roughly zero after lag 1, which indicates stationary data.
2.4 Autocorrelation function of cointegration residuals
LAG ACF PACF Q-STAT. [p-value]
1 0.9721 *** 0.9721 *** 167.2790 0.0641
2 0.9358 *** -0.1664 ** 323.2093 0.0000
3 0.8927 *** -0.1243 465.9264 0.0000
4 0.8495 *** 0.0078 595.9203 0.0000
5 0.8059 *** -0.0220 713.6020 0.0000
6 0.7632 *** -0.0101 819.7695 0.0000
7 0.7259 *** 0.0765 916.4095 0.0000
8 0.6912 *** -0.0050 1004.5484 0.0000
9 0.6614 *** 0.0428 1085.7421 0.0000
10 0.6343 *** 0.0078 1160.8656 0.0000
11 0.6055 *** -0.0776 1229.7398 0.0000
12 0.5709 *** -0.1218 1291.3620 0.0000
13 0.5245 *** -0.2084 1343.6957 0.0000
14 0.4711 *** -0.0994 1386.1673 0.0000
15 0.4130 *** -0.0528 1419.0153 0.0000
16 0.3565 *** 0.0305 1443.6479 0.0000
17 0.3038 *** 0.0448 1461.6534 0.0000
18 0.2570 *** 0.0492 1474.6184 0.0000
19 0.2098 *** -0.1170 1483.3126 0.0000
20 0.1661 ** -0.0256 1488.8020 0.0000
21 0.1272 * -0.0036 1492.0410 0.0000
22 0.0978 0.1064 1493.9692 0.0000
However, according to the LB test nearly all the p-values are near 0, which
would indicate non-stationary data. This is because gretl interprets the LB
test results strictly. On balance, we can say that the cointegration
residuals are weakly stationary.
3 Engle-Granger Test of Cointegration
Model 11: OLS, using observations 1991:05–2005:06 (T = 170)
Dependent variable: d CointRe
Coefficient Std. Error t-ratio p-value
const -0.00019022 0.0001854 -1.026 0.3065
Coint 1 -0.0189401 0.0101629 -1.855 0.0641
d CointRe 1 0.452030 0.0702484 6.4347 0.0000
d CointRe 2 0.331079 0.0753387 4.3945 0.0000
d CointRe 3 -0.369857 0.0698553 -5.2946 0.0000
Mean dependent var -0.000265 S.D. dependent var 0.002976
Sum squared resid 0.000945 S.E. of regression 0.002386
R² 0.373609    Adjusted R² 0.362288
F(4,166) 24.75252    P-value(F) 4.40e-16
Log-likelihood 787.2614    Akaike criterion -1566.523
Schwarz criterion -1553.980    H-Q -1561.433
ρ̂ 0.099597    Durbin’s h 3.235601
The results of the Engle-Granger test are shown above. The absolute value of
the t-ratio on the lagged cointegration residual (1.855) is less than the
critical value of 3.9, which indicates the residuals from the cointegrating
regression are not stationary. Since the residuals are non-stationary, we
estimate a VAR rather than a VECM, and move on to lag selection.
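The two-step Engle-Granger logic can be sketched in pure Python on simulated data (illustrative only; note that critical values for a unit root test on estimated residuals differ from the standard Dickey-Fuller ones):

```python
import math
import random

def df_t_ratio(e):
    """Dickey-Fuller t-ratio on e_{t-1} in de_t = b1*e_{t-1} + u_t.
    No constant: OLS residuals are mean zero by construction. Sketch only."""
    x = e[:-1]
    de = [e[t] - e[t - 1] for t in range(1, len(e))]
    sxx = sum(v * v for v in x)
    b1 = sum(a * b for a, b in zip(x, de)) / sxx
    ssr = sum((b - b1 * a) ** 2 for a, b in zip(x, de))
    return b1 / math.sqrt(ssr / (len(de) - 1) / sxx)

random.seed(3)
x, level = [], 0.0
for _ in range(400):                 # x is a random walk
    level += random.gauss(0, 1)
    x.append(level)
# y is cointegrated with x: y = 2 + 0.5*x + stationary noise
y = [2.0 + 0.5 * xi + random.gauss(0, 0.3) for xi in x]

# Step 1: cointegrating regression of y on x by OLS
n = len(x)
mx, my = sum(x) / n, sum(y) / n
beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
        / sum((a - mx) ** 2 for a in x))
alpha = my - beta * mx
resid = [b - alpha - beta * a for a, b in zip(x, y)]

# Step 2: unit-root test on the residuals; a large negative t-ratio means
# the residuals are stationary, i.e. x and y are cointegrated
t_resid = df_t_ratio(resid)
```

In this simulated cointegrated pair the residual test statistic is strongly negative; in the report's data it is only -1.855, which is why cointegration is rejected there.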
3.1 Hypothesis test
H0: ρ = 1 → unit root; residuals non-stationary (no cointegration)
H1: ρ < 1 → no unit root; residuals stationary (cointegration)
∆CointRe_t = β0 + β1·CointRe_{t−1} + γ1·∆CointRe_{t−1}
+ γ2·∆CointRe_{t−2} + γ3·∆CointRe_{t−3} + u_t,  with β1 = ρ − 1
4 Estimation of the VAR Model with the Optimal Lag Choice
4.1 Output for Optimal Lag Choice for VAR
Lag BIC
1 -19.144385*
2 -19.023342
3 -19.106130
4 -19.063737
Using gretl’s VAR lag selection to choose the optimal lag for the VAR, the
lowest BIC occurs at lag 1, so that is our optimal lag choice; the results
are shown below.
5 VAR Estimation (continued)
5.1 The Fit
In equation 1 the adjusted R² is 0.923847, so about 92.4% of the variation in
the dependent variable is explained by the regressors, a very strong fit. The
SER is 0.001574, so the average distance between the actual data and our
fitted values is about 0.16%, which is relatively small. Looking at the
p-values we can see that the lag of the composite index is significant, but
the lag of the Atlanta index is not. Overall, equation 1 fits well.
For equation 2 the adjusted R² is 0.246231, so only about 24.6% of the
variation is explained. The SER is 0.002653, so the average distance between
the actual data and our fitted values is about 0.27%, much larger than in
equation 1. Finally, looking at the p-values we can see that the lag of the
Atlanta index is significant while the lag of the composite index is not at
the 90% confidence level. Overall, the fit of equation 2 is not very strong.
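These fit statistics can be reproduced from the sums of squares in the VAR output. A small pure-Python sketch, where k counts the estimated coefficients and TSS is recovered from the reported standard deviation of the dependent variable:

```python
import math

def fit_stats(ssr, tss, T, k):
    """R^2, adjusted R^2 and SER for an OLS fit with k coefficients."""
    r2 = 1 - ssr / tss
    adj_r2 = 1 - (1 - r2) * (T - 1) / (T - k)
    ser = math.sqrt(ssr / (T - k))
    return r2, adj_r2, ser

# Equation 1 of the in-sample VAR (Appendix 9.3.1): T = 172, k = 3
# (const + two lags), SSR = 0.000419, S.D. of dependent var = 0.005705,
# so TSS = (T - 1) * sd^2
T, k = 172, 3
tss = (T - 1) * 0.005705 ** 2
r2, adj_r2, ser = fit_stats(0.000419, tss, T, k)
# matches the gretl output to rounding:
# R^2 ~ 0.9247, adjusted R^2 ~ 0.9238, SER ~ 0.001574
```

This is a useful consistency check on any reported regression table.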
5.2 Visual Diagnostics
5.2.1 Combined time series: SPIndex VAR & AtlantaHPIndex VAR
Figure 6: Combined time series for SPIndex VAR & AtlantaHPIndex VAR
5.2.2 Correlogram
Figure 7: Correlogram of S&P VAR residuals
The ACF values are almost entirely within the confidence interval, straying
outside only slightly. Therefore we would say the SPIndex VAR residuals are
stationary.
Figure 8: Correlogram of AtlantaHPIndex VAR residuals
Some ACF values fall outside the confidence interval, but not by enough to
call the series non-stationary. Therefore we would say the AtlantaHPIndex VAR
residuals are weakly stationary.
5.3 White Noise
5.3.1 LB test for SPIndex
LAG ACF PACF Q-STAT. [p-value]
1 0.1097 0.1097 2.1074 0.147
2 0.1275 0.1142 4.8502 0.088
3 -0.1195 -0.1478 * 7.3770 0.061
4 -0.1824 ** -0.1768 ** 13.3037 0.010
5 -0.1675 ** -0.1061 18.3308 0.003
6 -0.0913 -0.0387 19.8324 0.003
7 0.0880 0.1015 21.2367 0.003
8 0.0956 0.0431 22.9043 0.003
9 0.1542 ** 0.0654 27.2687 0.001
10 0.0189 *** -0.0394 27.3350 0.002
11 -0.1113 *** -0.1283 * 29.6391 0.002
12 -0.1640 ** -0.0973 34.6680 0.001
13 -0.1385 * -0.0354 38.2770 0.000
14 -0.0434 0.0114 38.6340 0.000
15 0.0650 0.0472 39.4390 0.001
16 0.1371 * 0.0425 43.0449 0.000
17 0.0440 -0.0751 43.4190 0.000
18 0.1735 ** 0.1373 * 49.2684 0.000
19 -0.0594 -0.0355 49.9592 0.000
20 0.0723 0.1359 * 50.9878 0.000
21 -0.0440 0.0450 51.3721 0.000
22 -0.0030 0.0062 51.3739 0.000
For the SPIndex VAR residuals, the LB test p-values are almost all near 0,
which would indicate the residuals are not white noise. However, from the
correlogram we know that gretl is being strict about the few ACF values that
fall outside the confidence interval. Therefore we would call the SPIndex VAR
residuals weakly stationary.
5.3.2 LB test for AtlantaHPIndex
LAG ACF PACF Q-STAT. [p-value]
1 -0.0583 -0.0583 0.5949 0.441
2 0.3143 *** 0.3120 *** 17.9880 0.000
3 -0.3621 *** -0.3683 *** 41.2057 0.000
4 0.1474 * 0.0766 45.30768 0.000
5 -0.1310 * 0.0974 48.1499 0.000
6 -0.0060 -0.2608 *** 48.1564 0.000
7 -0.1571 ** -0.0548 52.6352 0.000
8 -0.0112 0.0734 52.6583 0.000
9 -0.0758 -0.1561 53.7125 0.000
10 0.0620 0.0395 ** 54.4235 0.000
11 0.1589 ** 0.3290 *** 59.1193 0.000
12 0.2521 *** 0.1488 * 71.0115 0.000
13 0.2439 *** 0.1653 ** 82.2055 0.000
14 0.0763 0.1605 ** 83.3123 0.000
15 -0.0040 -0.0570 83.3123 0.001
16 -0.1467 * -0.1614 ** 87.4399 0.000
17 -0.1007 -0.0479 89.3993 0.000
18 -0.0734 0.0325 90.4468 0.000
19 -0.0033 0.0158 90.4489 0.000
20 -0.1125 -0.1011 92.9404 0.000
21 -0.0708 -0.0365 93.9351 0.000
22 -0.0903 -0.0950 95.5607 0.000
The situation is the same for the AtlantaHPIndex VAR: the LB test p-values
are near 0, which would indicate non-stationarity. However, the correlogram
shows almost all the ACF values within the confidence interval, so we could
call these residuals weakly stationary as well.
6 In-sample VAR forecasts
6.1 In-sample SPIndex VAR forecasts
Figure 9: In-sample VAR forecast for SPIndex
Mean Squared Error 0.00015244
Mean Absolute Error 0.0093385
Looking at the graph above we can see that our forecast drastically
overestimates the SP Index; clearly the model was unable to predict the
financial meltdown of 2007-2008. Once the market recovers by 2010 the
forecast looks better, but it still under- and over-predicts the SP Index.
The mean squared error is only 0.00015244, and since the MSE is the average
squared distance between the actual and forecasted values, our forecasting
method is reasonably accurate.
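The two accuracy measures quoted with each forecast are computed as follows; the numbers in this sketch are made up for illustration, not taken from the report:

```python
def mse(actual, forecast):
    """Mean squared error: average squared distance from data to forecast."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mae(actual, forecast):
    """Mean absolute error: average absolute distance from data to forecast."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

# Tiny made-up example (monthly log-differences, not the report's data)
actual = [0.010, 0.008, -0.004, 0.006]
forecast = [0.012, 0.005, -0.001, 0.009]
errors = (mse(actual, forecast), mae(actual, forecast))
```

Because MSE squares each error, it penalizes occasional large misses more heavily than MAE does, which is why both are worth reporting.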
6.2 In-sample AtlantaHPIndex VAR forecasts
Figure 10: In-sample VAR forecast for AtlantaHPIndex
Mean Squared Error 0.00027322
Mean Absolute Error 0.01179
Looking at the graph above, the forecast differs considerably from the actual
values, but it does seem to capture the average trend of the Atlanta index.
We can also see similarities between Atlanta and the composite data: when
Atlanta is mostly below zero so is the composite, and when Atlanta is mostly
above zero so is the composite. The MSE of this forecast, 0.00027322, is a
little larger than the previous MSE but still relatively small. Therefore our
forecast for Atlanta is not quite as accurate as our forecast for the
composite data.
7 AR(1) in-sample forecasts
7.1 AR(1) for SPIndex
Figure 11: AR(1) in-sample forecast for SPIndex
Mean Squared Error 0.00047428
Mean Absolute Error 0.020036
Looking at our AR(1) forecast, it is clear that the predictions vastly
overestimate the actual values of the composite data. This is confirmed by
the MSE of 0.00047428, over three times larger than the MSE under the VAR
approach. Therefore the VAR approach is more accurate.
7.2 AR(1) for AtlantaHPIndex
Figure 12: AR(1) in-sample forecast for AtlantaHPIndex
Mean Squared Error 0.00028477
Mean Absolute Error 0.012239
Just as with the VAR approach, the forecast regularly under- and
over-predicts the actual values for Atlanta but captures the average
effectively. The MSE under this method is 0.00028477, only slightly larger
than the 0.00027322 obtained under the VAR approach. Therefore the VAR
approach is more accurate, but only barely.
8 Out-of-sample VAR forecasting
8.1 Out-of-sample forecasting for SPIndex VAR
Figure 13: Out-of-sample forecasting for SPIndex VAR
Our out-of-sample forecast for the composite data shows a gradual decline in
housing prices. Declining housing prices can lead to more consumers
defaulting on their mortgages, so banks will lose money. It also leaves
consumers with less wealth, which can mean lower spending; lower spending can
reduce the economic prosperity of the United States and can lead to a
recession.
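Mechanically, the out-of-sample path is produced by iterating the estimated VAR(1) forward. A sketch of that recursion, with coefficients rounded from the in-sample VAR output in Appendix 9.3.1 (so the long-run values here are only illustrative):

```python
# Hypothetical VAR(1) coefficients, rounded from the in-sample estimates
# (rows: d_l_SPIndex and d_l_AtlantaHPIndex equations)
c = [0.0004, 0.0017]
A = [[0.95, 0.005],
     [0.06, 0.46]]

def var1_forecast(y_last, steps):
    """Iterate y_{t+1} = c + A*y_t forward from the last observed value."""
    path, y = [], list(y_last)
    for _ in range(steps):
        y = [c[i] + A[i][0] * y[0] + A[i][1] * y[1] for i in range(2)]
        path.append(y)
    return path

path = var1_forecast([0.003, 0.004], 400)
# Both eigenvalues of A lie inside the unit circle, so the forecast
# converges to the unconditional mean (I - A)^{-1} c
```

The near-unit own-lag coefficient in the SPIndex equation (0.95 here) is why that forecast reverts only very slowly, giving the gradual paths seen in Figures 13 and 14.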
8.2 Out-of-sample forecasting for AtlantaHPIndex
Figure 14: Out-of-sample forecasting for AtlantaHPIndex
Our out-of-sample forecast for Atlanta shows a very gradual increase in
housing prices. Rising housing prices increase the wealth of homeowners,
which can lead to higher consumption and help encourage economic growth in
the United States.
9 Appendix
9.1 Lag selection OLS regressions for the unit root test
9.1.1 Gretl code for OLS regression (SPIndex, 10 lags)
ols d_l_SPIndex 0 time l_SPIndex(-1) d_l_SPIndex(-1)
ols d_l_SPIndex 0 time l_SPIndex(-1) d_l_SPIndex(-1 to -2)
ols d_l_SPIndex 0 time l_SPIndex(-1) d_l_SPIndex(-1 to -3)
ols d_l_SPIndex 0 time l_SPIndex(-1) d_l_SPIndex(-1 to -4)
ols d_l_SPIndex 0 time l_SPIndex(-1) d_l_SPIndex(-1 to -5)
ols d_l_SPIndex 0 time l_SPIndex(-1) d_l_SPIndex(-1 to -6)
ols d_l_SPIndex 0 time l_SPIndex(-1) d_l_SPIndex(-1 to -7)
ols d_l_SPIndex 0 time l_SPIndex(-1) d_l_SPIndex(-1 to -8)
ols d_l_SPIndex 0 time l_SPIndex(-1) d_l_SPIndex(-1 to -9)
ols d_l_SPIndex 0 time l_SPIndex(-1) d_l_SPIndex(-1 to -10)
9.1.2 Gretl code for OLS regression (AtlantaHPIndex, 10 lags)
ols d_l_AtlantaHPIndex 0 time l_AtlantaHPIndex(-1) d_l_AtlantaHPIndex(-1)
ols d_l_AtlantaHPIndex 0 time l_AtlantaHPIndex(-1) d_l_AtlantaHPIndex(-1 to -2)
ols d_l_AtlantaHPIndex 0 time l_AtlantaHPIndex(-1) d_l_AtlantaHPIndex(-1 to -3)
ols d_l_AtlantaHPIndex 0 time l_AtlantaHPIndex(-1) d_l_AtlantaHPIndex(-1 to -4)
ols d_l_AtlantaHPIndex 0 time l_AtlantaHPIndex(-1) d_l_AtlantaHPIndex(-1 to -5)
ols d_l_AtlantaHPIndex 0 time l_AtlantaHPIndex(-1) d_l_AtlantaHPIndex(-1 to -6)
ols d_l_AtlantaHPIndex 0 time l_AtlantaHPIndex(-1) d_l_AtlantaHPIndex(-1 to -7)
ols d_l_AtlantaHPIndex 0 time l_AtlantaHPIndex(-1) d_l_AtlantaHPIndex(-1 to -8)
ols d_l_AtlantaHPIndex 0 time l_AtlantaHPIndex(-1) d_l_AtlantaHPIndex(-1 to -9)
ols d_l_AtlantaHPIndex 0 time l_AtlantaHPIndex(-1) d_l_AtlantaHPIndex(-1 to -10)
9.2 Cointegrating regression
9.2.1 Gretl code
ols d_CointRe 0 CointRe(-1) d_CointRe(-1)
ols d_CointRe 0 CointRe(-1) d_CointRe(-1 to -2)
ols d_CointRe 0 CointRe(-1) d_CointRe(-1 to -3)
ols d_CointRe 0 CointRe(-1) d_CointRe(-1 to -4)
ols d_CointRe 0 CointRe(-1) d_CointRe(-1 to -5)
ols d_CointRe 0 CointRe(-1) d_CointRe(-1 to -6)
ols d_CointRe 0 CointRe(-1) d_CointRe(-1 to -7)
ols d_CointRe 0 CointRe(-1) d_CointRe(-1 to -8)
ols d_CointRe 0 CointRe(-1) d_CointRe(-1 to -9)
ols d_CointRe 0 CointRe(-1) d_CointRe(-1 to -10)
9.3 VAR in-sample forecasting
9.3.1 In-sample VAR output
VAR system, lag order 1
OLS estimates, observations 1991:03–2005:06 (T = 172)
Log-likelihood = 1645.37
Determinant of covariance matrix = 1.68288e–011
AIC = −19.0624
BIC = −18.9526
HQC = −19.0179
Portmanteau test: LB(43) = 290.434, df = 168 [0.0000]
Equation 1: d l SPIndex
Coefficient Std. Error t-ratio p-value
const 0.000362622 0.000200586 1.8078 0.0724
d l SPIndex 1 0.949869 0.0215352 44.1077 0.0000
d l AtlantaHPIndex 1 0.00473526 0.0401591 0.1179 0.9063
Mean dependent var 0.005683 S.D. dependent var 0.005705
Sum squared resid 0.000419 S.E. of regression 0.001574
R² 0.924738    Adjusted R² 0.923847
F(2,169) 1038.238    P-value(F) 1.18e-95
ρ̂ 0.109826    Durbin-Watson 1.770112
F-tests of zero restrictions
All lags of dlSPIndex F(1, 169) = 1945.49 [0.0000]
All lags of dlAtlantaHPIndex F(1, 169) = 0.0139034 [0.9063]
Equation 2: d l AtlantaHPIndex
Coefficient Std. Error t-ratio p-value
const 0.00166369 0.000337961 4.9227 0.000
d l SPIndex 1 0.0589337 0.0362840 1.6242 0.1062
d l AtlantaHPIndex 1 0.459692 0.0676628 6.7939 0.0000
Mean dependent var 0.003602 S.D. dependent var 0.003055
Sum squared resid 0.001189 S.E. of regression 0.002653
R² 0.255047    Adjusted R² 0.246231
F(2,169) 28.92997    P-value(F) 1.57e-11
ρ̂ -0.059246    Durbin-Watson 2.100654
F-tests of zero restrictions
All lags of dlSPIndex F(1, 169) = 2.63813 [0.1062]
All lags of dlAtlantaHPIndex F(1, 169) = 46.1565 [0.0000]
9.3.2 AR(1) in-sample forecast
Model 13: OLS, using observations 1991:03–2005:06 (T = 172)
Dependent variable: d l SPIndex
Coefficient Std. Error t-ratio p-value
const -0.000402302 0.000284916 -1.4120 0.1598
time 1.70494e-005 5.13528e-006 3.3200 0.0011
d l SPIndex 1 0.819612 0.0443024 18.5004 0.0000
Mean dependent var 0.003602 S.D. dependent var 0.003055
Sum squared resid 0.001189 S.E. of regression 0.002653
R² 0.255047    Adjusted R² 0.246231
F(2,169) 28.92997    P-value(F) 1.57e-11
Log-likelihood 777.5473    Akaike criterion -1549.095
Schwarz criterion -1539.652    Hannan-Quinn -1545.263
ρ̂ 1539.652    Durbin-Watson 2.100654
Model 14: OLS, using observations 1991:03–2005:06 (T = 172)
Dependent variable: d l AtlantaHPIndex
Coefficient Std. Error t-ratio p-value
const 0.00142508 0.000444203 3.2082 0.0016
time 6.10844e-006 1.06303e-006 1.4673 0.1442
d l AtlantaHPIndex 1 0.467432 0.0669743 6.9793 0.0000
Mean dependent var 0.003602 S.D. dependent var 0.003055
Sum squared resid 0.001193 S.E. of regression 0.002656
R² 0.252935    Adjusted R² 0.244094
F(2,169) 28.60935    P-value(F) 1.99e-11
Log-likelihood 777.5473    Akaike criterion -1549.095
Schwarz criterion -1539.652    Hannan-Quinn -1545.263
ρ̂ -0.064760    Durbin-Watson 1.776816
9.4 Out-of-sample VAR forecasting
9.4.1 Output of out-of-sample VAR forecasting
VAR system, lag order 1
OLS estimates, observations 1991:03–2015:12 (T = 298)
Log-likelihood = 2482.07
Determinant of covariance matrix = 1.99754e–010
AIC = −16.6179
BIC = −16.5435
HQC = −16.5881
Portmanteau test: LB(48) = 899.196, df = 188 [0.0000]
Equation 1: d l SPIndex
Coefficient Std. Error t-ratio p-value
const 0.000178243 0.000139241 1.2801 0.2015
d l SPIndex 1 0.957289 0.0181784 52.6607 0.0000
d l AtlantaHPIndex 1 -0.000291567 0.0132617 -0.0220 0.9825
Mean dependent var 0.003125 S.D. dependent var 0.007896
Sum squared resid 0.001479 S.E. of regression 0.002239
R² 0.920109    Adjusted R² 0.919568
F(2,295) 1698.775    P-value(F) 1.3e-162
ρ̂ 0.116119    Durbin-Watson 1.763733
F-tests of zero restrictions
All lags of dlSPIndex F(1, 295) = 2773.15 [0.0000]
All lags of dlAtlantaHPIndex F(1, 295) = 0.000483369 [0.9825]
Equation 2: d l AtlantaHPIndex
Coefficient Std. Error t-ratio p-value
const 8.88034e-005 0.000399593 0.2222 0.8243
d l SPIndex 1 0.133320 0.0521682 2.5556 0.0111
d l AtlantaHPIndex 1 0.759362 0.0380583 19.9526 0.0000
Mean dependent var 0.002005 S.D. dependent var 0.010837
Sum squared resid 0.012183 S.E. of regression 0.006426
R² 0.650701    Adjusted R² 0.648333
F(2,295) 274.7739    P-value(F) 4.18e-68
ρ̂ 0.327013    Durbin-Watson 1.344706
F-tests of zero restrictions
All lags of dlSPIndex F(1, 295) = 6.53095 [0.0111]
All lags of dlAtlantaHPIndex F(1, 295) = 398.107 [0.0000]
ols d_CointRe 0 CointRe(-1) d_CointRe(-1 to -7)
ols d_CointRe 0 CointRe(-1) d_CointRe(-1 to -8)
ols d_CointRe 0 CointRe(-1) d_CointRe(-1 to -9)
ols d_CointRe 0 CointRe(-1) d_CointRe(-1 to -10)
Question 6
freq uhat11 --normal
corrgm uhat12 22
freq uhat12 --normal
var 3 d_l_SPIndex d_l_HomePriceIndex
smpl 1991:01 2015:12
var 3 d_l_SPIndex d_l_HomePriceIndex
var 3 d_l_SPIndex d_l_HomePriceIndex
var 3 d_l_SPIndex d_l_HomePriceIndex
var 3 d_l_SPIndex d_l_HomePriceIndex --lagselect
var 3 d_l_SPIndex d_l_HomePriceIndex
Question 7
smpl 1991:01 2015:12
diff l_SPIndex
diff l_HomePriceIndex
var 3 d_l_SPIndex d_l_HomePriceIndex
smpl 1991:01 2005:06
var 3 d_l_SPIndex d_l_HomePriceIndex
Question 9
smpl 1991:01 2015:12
var 1 d_l_SPIndex d_l_HomePriceIndex
smpl --full
diff l_HomePriceIndex
smpl 1991:01 2015:12
smpl 1991:01 2005:06
var 1 d_l_SPIndex d_l_HomePriceIndex
smpl --full
var 1 d_l_SPIndex d_l_HomePriceIndex
var 1 d_l_SPIndex d_l_HomePriceIndex
var 1 d_l_SPIndex d_l_HomePriceIndex