This document presents a study analyzing the time variation paths of factors affecting bank stock returns using a flexible least squares (FLS) method. It discusses motivations, describes the problem and data collection/processing steps. It then provides details on the FLS method, including an example application. The document analyzes a three-index FLS model fit to the data and discusses the role of a parameter μ in identifying time-sensitive FLS coefficients. It outlines a procedure to estimate an appropriate μ value by building ordinary least squares models over different sub-intervals of the time period.
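The sub-interval OLS procedure for gauging μ can be sketched in a few lines. This is a rough illustration under assumed data, not the study's bank-return series: fit OLS on consecutive windows and inspect how much the slope drifts; large swings suggest time-varying coefficients and hence a smaller smoothness weight μ in the FLS objective.

```python
# Sketch of the sub-interval OLS idea used to gauge coefficient drift
# (a rough proxy for choosing the FLS smoothness weight mu).
# All data below are made-up illustrations, not the study's data.

def ols_slope(x, y):
    """Closed-form OLS slope for a single regressor with intercept."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def subinterval_slopes(x, y, window):
    """Fit OLS on consecutive sub-intervals; large swings in the slopes
    suggest time-varying coefficients (favoring a smaller mu in FLS)."""
    return [ols_slope(x[i:i + window], y[i:i + window])
            for i in range(0, len(x) - window + 1, window)]

# Hypothetical series whose true slope shifts from 1.0 to 2.0 halfway through
x = [float(i % 10) for i in range(40)]
y = [1.0 * xi for xi in x[:20]] + [2.0 * xi for xi in x[20:]]
print(subinterval_slopes(x, y, 10))  # slopes 1.0, 1.0, 2.0, 2.0
```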
Definition, functions, scope, limitations of statistics; diagrams and graphs; basic definitions and rules for probability, conditional probability and independence of events.
This document provides information and guidelines for students completing an econometrics project for ECON 762. It discusses acceptable topics, data sources, the required proposal and progress report, as well as formatting and content expectations for the final paper. Students must submit a 1-2 page proposal by February 13 describing their research question, data, and methods. A progress report is due April 7 to describe preliminary work and any issues encountered. The final paper should be 15-30 pages following a standard format with sections on introduction, literature review, data and methods, results, and conclusion.
There is a regression relationship between y and at least one of the three independent variables.
The estimated return for FSPTX when the growth index returns 1% and the value index returns -2% is 2.3%.
The growth index is statistically significant (we cannot reject its significance), but the value index is not (we reject its significance).
The estimated return for December is 3.01% and for January is 3.04%. The coefficients for July, September, and October are statistically significant, since we reject the hypothesis that they are insignificant.
We reject the joint hypothesis that all month coefficients are insignificant, so at least one month is statistically significant.
The model predicts a UER of 5.304 for July 1996, and the forecasts are evaluated by their mean absolute deviation.
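The mean absolute deviation (MAD) mentioned above can be computed as a simple average of absolute forecast errors. A minimal sketch, using invented placeholder numbers rather than the summarized model's values:

```python
# Mean absolute deviation (MAD) of forecast errors -- a minimal sketch.
# The figures below are invented placeholders, not the values from the
# summarized unemployment-rate (UER) model.

def mad(actual, forecast):
    """Average absolute forecast error."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

actual = [5.5, 5.4, 5.3, 5.2]
forecast = [5.6, 5.3, 5.4, 5.1]
print(mad(actual, forecast))  # each error is 0.1, so MAD = 0.1
```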
A PowerPoint presentation on statistics, by Kriace Ward
Statistics originated from Latin, Italian, and German words referring to organized states. Gottfried Achenwall is considered the "father of statistics" for coining the term to describe a specialized branch of knowledge. Modern statistics is defined as the science of judging collective phenomena through analysis and enumeration. While statistics can be an art and a science, its successful application depends on the skill of the statistician and their knowledge of the field being studied. Statistics are important across many domains from business, economics, and planning to the sciences. However, statistics also have limitations such as only studying aggregates, not individuals, and results being valid only on average and in the long run.
The document discusses business statistics and its importance. It defines statistics as the study of collecting, organizing, analyzing, and interpreting numerical data. There are five stages to statistical investigation: data collection, organization, presentation, analysis, and interpretation of results. Statistics helps simplify complex data, facilitate comparison between data sets, test hypotheses, formulate policies, and derive valid inferences. However, statistics has limitations as it does not study individuals, statistical laws are approximations rather than exact, and it only analyzes aggregated data rather than individual observations.
This document discusses various qualitative and quantitative forecasting methods including simple and weighted moving averages, exponential smoothing, and simple linear regression. It provides examples of how to calculate forecasts using each of these methods and evaluates forecast accuracy using metrics like MAD and tracking signal.
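Two of the techniques named above, a weighted moving average forecast and the tracking signal (cumulative error divided by MAD), can be sketched briefly. The demand numbers are hypothetical, chosen only for illustration:

```python
# A minimal sketch of a weighted moving average forecast and the
# tracking signal (cumulative error / MAD). Demand data are hypothetical.

def weighted_ma(history, weights):
    """Forecast as a weighted average of the most recent observations
    (weights listed oldest-to-newest, summing to 1)."""
    recent = history[-len(weights):]
    return sum(w * h for w, h in zip(weights, recent))

def tracking_signal(actual, forecast):
    """Running sum of forecast errors divided by MAD; values far from 0
    flag a biased forecast."""
    errors = [a - f for a, f in zip(actual, forecast)]
    mad = sum(abs(e) for e in errors) / len(errors)
    return sum(errors) / mad

demand = [100, 105, 110, 108]
print(weighted_ma(demand, [0.2, 0.3, 0.5]))   # 0.2*105 + 0.3*110 + 0.5*108 = 108.0
print(tracking_signal([100, 102, 104], [98, 100, 102]))  # all errors +2 -> 3.0
```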
This document discusses various methods for organizing and presenting categorical and numerical data using tables, charts, and graphs. It covers summarizing categorical data using summary tables, bar charts, pie charts, and Pareto diagrams. For numerical data, it discusses organizing data using ordered arrays, stem-and-leaf displays, frequency distributions, histograms, frequency polygons, ogives, contingency tables, side-by-side bar charts, and scatter plots. The goal is to effectively communicate patterns and relationships in the data.
This chapter introduces basic concepts in business statistics including how statistics are used in business, types of data and their sources, and popular software programs like Microsoft Excel and Minitab. It discusses descriptive versus inferential statistics and reviews key terminology such as population, sample, parameters, and statistics. The chapter also covers different types of variables, levels of measurement, and considerations for properly using statistical software programs.
This document provides an overview of statistics and its two main branches: descriptive statistics and inferential statistics. Descriptive statistics deals with presenting and collecting data through measures of central tendency and variability. Inferential statistics allows conclusions to be made about populations based on sample data through the use of probability. The document concludes by stating it has provided a better understanding of the branches of statistics and information on how to contact the tutoring service for additional help.
Statistics is the science of collecting, analyzing, and interpreting numerical data. It has evolved from early uses by governments to understand populations for taxation and military purposes. Modern statistics developed in the 18th-19th centuries and saw rapid growth in the 20th century with advances in computing. Statistics has two main branches - descriptive statistics which involves data presentation and inference statistics which uses data analysis to make estimates and test hypotheses. Statistics is widely used across many fields including business, economics, mathematics, and banking to facilitate decision making.
Introduction to Statistics - Basic Statistical Terms, by sheisirenebkm
Statistics is the study of collecting, organizing, and interpreting numerical data. It has two main branches: descriptive statistics, which summarizes and describes data, and inferential statistics, which is used to analyze samples and make generalizations about populations. The key concepts in statistics include populations, samples, parameters, statistics, qualitative and quantitative data, discrete and continuous variables.
This document provides an introduction to statistics, covering key topics such as what statistics is, its functions, applications in business, and subject matter. Statistics is defined as both a set of numerical data and a set of techniques for collecting, organizing, analyzing, and interpreting quantitative data. It serves functions like simplifying complex facts, providing comparisons, and forecasting. Statistics is used widely in business decision making across areas like marketing, finance, and operations. The subject matter of statistics has two parts - descriptive statistics, which summarizes data, and inferential statistics, which makes conclusions about large groups by studying samples.
The document discusses the importance of data quality, proper use of statistics, and correct interpretation of results in statistical analysis. It provides a 3 step approach: 1) Ensuring high quality data by addressing issues like missing values and outliers. 2) Appropriate use of statistical techniques after defining the variables and objectives clearly. Considering issues like correlation, normality, and model assumptions. 3) Careful interpretation of results while preserving the multidimensional nature of phenomena and considering partial correlations between variables. It emphasizes the need for collaboration between data miners, statisticians and domain experts for successful knowledge discovery.
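Step 1 of the approach above (data quality) can be shown in miniature: flag missing values and z-score outliers before any modeling. This is a sketch under assumed data; the `quality_report` helper and its threshold are illustrative, not from the document:

```python
# Step 1 of the data-quality approach in miniature: flag missing values
# and z-score outliers before modeling. Data are made up for illustration.

from statistics import mean, stdev

def quality_report(values, z_cut=2.0):
    """Return indices of missing entries and of z-score outliers."""
    missing = [i for i, v in enumerate(values) if v is None]
    clean = [v for v in values if v is not None]
    m, s = mean(clean), stdev(clean)
    outliers = [i for i, v in enumerate(values)
                if v is not None and abs(v - m) > z_cut * s]
    return missing, outliers

data = [10.1, 9.8, None, 10.3, 55.0, 10.0, 9.9, 10.2, 10.1, 9.7]
print(quality_report(data))  # missing at index 2, outlier (55.0) at index 4
```

A caveat worth noting: a large outlier inflates the standard deviation itself, so robust variants (e.g. median-based) are often preferred in practice.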
This document discusses research methodology and processing of data. It covers editing, coding, classification, and tabulation as important steps in processing data collected during research. Editing involves correcting errors and omissions in the data. Coding assigns standardized codes to responses for efficient analysis. Classification groups the data based on common characteristics. Tabulation arranges the classified data in an organized table for analysis. The document also defines hypothesis and discusses types of hypotheses, characteristics of a good hypothesis, and the procedure for testing hypotheses using statistical techniques. Finally, it defines interpretation as drawing inferences from analyzed data and discusses techniques for proper interpretation.
Introduction to statistics for social sciences 1, by Minal Jadeja
This document provides an introduction to statistics. It defines statistics as the collection, presentation, analysis, and interpretation of numerical data. Statistics can refer to either quantitative information or a method of dealing with quantitative or qualitative information. There are two main approaches in statistics: descriptive statistics, which presents data in tables or graphs to give a general picture of a sample, and inferential statistics, which involves techniques for making inferences about a whole population based on a sample. Key uses of statistics include showing how samples differ from normal distributions, facilitating comparisons, simplifying messages in data, helping to formulate and test hypotheses, and aiding in prediction and inference. However, there are also some limitations to consider with statistics.
Business Statistics Notes for Business and Commerce Department, by Seetal Daas
This was composed by me, under the instruction of the esteemed Sir Haji Ahmed Solangi, during my third-semester academic session in 2014 at the University of Sindh, Laar Campus, Badin.
This document provides an introduction to business statistics. It defines statistics as the science of collecting, organizing, analyzing, and interpreting numerical data. The document notes that statistics can refer to both quantitative information and the methods used to analyze that information. It describes the key stages of a statistical analysis: data collection, organization, presentation, analysis, and interpretation. The document also discusses whether statistics is a science or an art and the important functions of statistics like providing definiteness, enabling comparison, and aiding in prediction.
Continual improvement of the quality management system, by selinasimpson1501
This document provides information about continual improvement of quality management systems, including definitions, core concepts, steps, and common tools. It defines continuous quality improvement (CQI) as an approach that emphasizes continual incremental changes using data analysis to improve processes and meet customer expectations. The document lists and describes several frequently used quality management tools, including check sheets, control charts, Pareto charts, scatter plots, Ishikawa diagrams, histograms, and their purposes. It also provides additional online resources on quality management topics.
This chapter discusses data exploration techniques including preparing data for analysis through data reduction, coding, and descriptive statistics. Graphic and descriptive techniques are used to summarize and describe data numerically and graphically. Common graphs discussed are frequency distributions, scatterplots, line graphs, bar graphs, and box-and-whisker plots which can show relationships between variables and the distribution of data. Checks for invalid, missing, and outlier data are recommended before conducting inferential statistical analyses.
This document discusses quality management. It provides information on quality management forms, strategies and tools. It discusses how high performing organizations practice quality management through trust, integrity, coaching, accountability and leadership. It then describes several quality management tools: check sheets, control charts, Pareto charts, scatter plots and Ishikawa diagrams. These tools can help organizations achieve quality objectives.
The document discusses implementing total quality management (TQM) in education. It describes how TQM was developed and adopted, then explains how some key TQM tools and techniques like the PDCA cycle, flow diagrams, brainstorming, data collection, graphs, and cause-and-effect diagrams can be applied to education to help solve problems and continually improve quality.
This document discusses the scope and uses of statistics across various fields such as planning, economics, business, industry, mathematics, science, psychology, education, war, banking, government, sociology, and more. It outlines functions of statistics like presenting facts, testing hypotheses, forecasting, policymaking, enlarging knowledge, measuring uncertainty, simplifying data, deriving valid inferences, and drawing rational conclusions. It also covers characteristics, advantages, and limitations of statistics.
Nature, Scope, Functions and Limitations of Statistics, by Asha Dhilip
This document defines statistics and discusses its uses and limitations. Statistics is defined as the collection, organization, analysis, and interpretation of numerical data in a systematic and accurate manner to draw valid inferences. It is used in business and management for marketing, production, finance, banking, investment, purchasing, accounting, and control. While statistics is useful for simplifying complex data and facilitating comparison, it has limitations in that it only examines quantitative aspects on average, not individuals, and statistical results may not be exact.
This document provides an overview of statistics for social work research. It defines statistics as the science of developing knowledge through empirical data expressed quantitatively, based on probability theory. Statistics involves collecting, summarizing, and analyzing numerical data. Descriptive statistics summarize and describe data, while inferential statistics model patterns in data to draw inferences about populations. The document discusses the characteristics, functions, scope, limitations, and potential misuse of statistics.
This project aims to predict credit card payment defaulters. R is used for exploratory data analysis, and both R and Azure ML are used for model building.
This document provides information about lean quality management including definitions, strategies, and tools. Lean quality management focuses on maximizing customer value and minimizing waste. It treats customers as the most important part of business. The document then describes several quality management tools including check sheets, control charts, Pareto charts, scatter plots, and Ishikawa diagrams that can be used for lean quality management.
The Airports Authority of India (AAI) is responsible for managing airports and providing air traffic control services across India. It generates revenue through airport development, landing/parking fees, and air traffic control services. Key responsibilities include controlling airspace, installing and maintaining communications and navigation equipment, developing and managing terminals, and providing air traffic control, rescue and fire services, and security. AAI oversees air traffic control, sets air routes, and provides area flight information, notices to airmen, and communications, navigation, and surveillance services through technologies like radar, VHF radio, and navigation aids.
Debates on Open Source Software: "The house believes that the future of Web in UK Higher and Further Education communities lies in the adoption of open source software".
See http://www.ukoln.ac.uk/web-focus/events/workshops/webmaster-2002/debate/
This document describes the 8 steps to the secret of success, covering thoughts, programs, words, attitudes, habits, character, and destiny. It explains that following these steps will help people think and speak in the right way to succeed in life, since focusing on positive thoughts and character leads to success.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise boosts blood flow and levels of neurotransmitters and endorphins which elevate and stabilize mood.
I am determined professional, committed to my goals and a passionately driven performer having the physical and mental capacity to relentlessly pursue assigned targets and tasks to successful completion.
Mohammed SH Abdallah earned a Certificate of Achievement for successfully completing a course in Principles of Project Management. The certificate details that he passed the course on December 17th, 2013 with a final score of 85% by completing assessments in four modules, scoring highest in Project Management Overview and The Develop Phase.
Los documentos proporcionan información sobre varias batallas históricas importantes. La Batalla de Platea (479 a.C.) fue una victoria decisiva de los griegos sobre los persas en la Guerra Greco-Persa. La Batalla de las Termópilas (480 a.C.) involucró a 300 espartanos que lucharon valientemente contra el ejército persa. Otras batallas descritas incluyen la Batalla de Kadesh (1274 a.C.), donde el faraón Ramses II se enfrentó al ejército hitita, y
This document discusses quantitative approaches to forecasting, including time series analysis and forecasting techniques. It covers the components of a time series, including trends, cycles, seasonality, and irregular components. Specific quantitative forecasting approaches covered include smoothing methods like moving averages, weighted moving averages, and exponential smoothing. Examples are provided to demonstrate how to perform moving averages and exponential smoothing on time series data for sales of headache medicine. The document aims to teach readers how to analyze time series data and select appropriate forecasting techniques.
The document discusses various quantitative forecasting techniques including smoothing methods, trend projection, and regression analysis. It provides examples of using moving averages, weighted moving averages, and exponential smoothing to forecast sales data for Robert's Drugs. Specifically, it calculates the mean squared error for different smoothing techniques including a two period moving average, three period moving average, and exponential smoothing with alphas of 0.1 and 0.2 to determine the best method for the Robert's Drugs data.
1. The document discusses modern banking strategies for managing risk and selecting profitable investment portfolios. It addresses questions about optimal portfolio structure in variable interest rate environments, appropriate banking products, and successful risk management.
2. Banks can calculate expected returns on asset groups to inform investment decisions, though some may prefer lower risk assets even with similar expected returns. Duration matching of assets and liabilities can help mitigate interest rate risk.
3. Banks employ tools like gap analysis, repricing schedules, and derivatives to manage their exposure to interest rate movements and ensure accurate understanding of risks from their asset-liability mix. Portfolio structure and risk management techniques are crucial for banks' financial stability and performance.
This document summarizes a master's thesis project that investigates using break-point analysis to predict bankruptcy risk. The project was motivated by weaknesses in previous bankruptcy prediction studies, namely that they did not consider the timing of financial data or rates of change. The project applies Bayesian change-point (break-point) analysis to financial ratios of firms, to identify significant changes that may indicate distress. If break-points are found closer to a firm's bankruptcy, it could improve prediction compared to only considering financial ratio levels. The document describes the data, analytical methods, and preliminary exploration and application of break-point analysis to the ratios.
This document provides an introduction to panel data analysis and regression models for panel data. It defines panel data as longitudinal data collected on the same units (like individuals, firms, countries) over multiple time periods. Panel data allow researchers to study changes over time and estimate causal effects. The document outlines common panel data structures, reasons for using panel data analysis, and basic estimation techniques like fixed effects and random effects models to account for unobserved heterogeneity across units. It also discusses assumptions and limitations of different panel data models.
The document discusses various methods for forecasting future demand based on past demand information. It describes qualitative methods like market research and quantitative methods like time series analysis. Time series methods discussed include simple and weighted moving averages to predict the next period's demand, as well as exponential smoothing which weights recent observations more than past ones. Linear regression is also covered as a way to explore relationships between dependent and independent variables. The document emphasizes that accuracy of forecasts should be evaluated using metrics like mean absolute deviation and mean forecast error in order to compare different forecasting models.
This document discusses various forecasting methods including qualitative methods like panel consensus and quantitative methods like time series analysis. It explains moving averages, weighted moving averages, and exponential smoothing for time series forecasting. Moving averages are simple to calculate but do not respond well to trends while exponential smoothing accounts for trends using smoothing constants. Linear regression can also be used to explore relationships between dependent and independent variables for forecasting. Overall the key points are that forecasting predicts future demand based on past data, different quantitative methods are suited to different situations, and accuracy depends on how well past patterns predict the future.
This document provides an overview of using Stata for data management and reproducible research. It describes the Stata environment including the toolbar, command panel, review panel, results panel and variables panel. It demonstrates loading sample data using sysuse and viewing metadata about the data using describe and summary statistics using summarize. Reproducible research is facilitated by writing commands in a do-file that can be executed from the do-file editor.
This document discusses various forecasting methods including:
- Calculating forecasts using moving averages, weighted moving averages, and exponential smoothing
- Choosing the appropriate forecasting model based on data availability, time horizon, required accuracy, and resources
- Comparing forecast accuracy using metrics like forecast error which measure the difference between actual and forecasted values
This document provides an overview of key concepts in business statistics including applications, basic terminology, populations and samples, descriptive statistics, and statistical inference. It discusses how statistics are used in various business fields such as accounting, finance, marketing, production, and economics. Basic terms are defined including data, qualitative and quantitative variables, observations, and elements. The difference between populations and samples is explained. Descriptive statistics and statistical inference are introduced along with an example analyzing repair costs at an auto shop.
Here are 3 practice problems using quantitative forecasting methods:
1. Using simple exponential smoothing, forecast next period's sales given the following data with a smoothing constant of 0.3:
Period: Sales
1: 100
2: 110
3: 120
4: ?
Forecast: F1 = 100
F2 = 100 + 0.3(110 - 100) = 103
F3 = 103 + 0.3(120 - 103) = 108.9
F4 = 108.9 + 0.3(120 - 108.9) = 113.67
2. Using linear regression, forecast next year's profits based on advertising expenditures given:
Year: Prof
The document discusses various forecasting techniques used in business analytics. It begins by explaining the importance of forecasting and defining time-series data components like trend, seasonality, cyclicality and irregular components. It then covers techniques like moving average, single exponential smoothing, Holt's method, Croston's method and regression models. It also discusses identifying appropriate autoregressive (AR) and moving average (MA) models using autocorrelation functions and model selection techniques like ARIMA.
This document provides an overview of an analytical methods course for economics and finance. It introduces the course staff and coordinators. It describes how econometrics can be used to answer quantitative questions about economics and business. It also discusses different types of economic data and some basic mathematical and statistical concepts needed for the course, including summation, probability, and random variables. An important note reminds students about class attendance, staff consultation hours, accessing learning materials, and preparing for an upcoming online quiz.
This document discusses bivariate linear regression and its understanding. Bivariate linear regression, also called simple linear regression, involves modeling the relationship between a dependent variable (Y) and a single independent variable (X). The regression equation takes the form of Y = β0 + β1X + ε, where β0 is the intercept, β1 is the slope coefficient, and ε is the error term. This equation can be used to predict Y values based on X values, as well as understand how much variation in Y can be explained by X. Parameters β0 and β1 are estimated to maximize the explanatory power of X for Y while minimizing prediction errors.
This document describes a big data analysis project conducted by a group of students. Various predictive analytics methods were used to analyze transaction data from a store, including moving average, exponential smoothing, regression, and k-means clustering. The results and analyses are presented in tables and graphs. Recommendations will be made to the store manager based on the findings to help improve sales.
This document provides an introduction to econometrics. It defines econometrics as the application of statistical and mathematical techniques to economic data in order to test economic theories and models. The document outlines the methodology of econometrics, including stating an economic theory, specifying mathematical and econometric models, obtaining data, estimating models, hypothesis testing, forecasting, and using models for policy purposes. It also discusses the structure of economic data such as time series, cross-sectional, and panel data. Finally, it covers key econometric concepts like the categories of variables and the differences between ratio and interval scales.
- Forecasting helps reduce risk and uncertainty in decision making by predicting future outcomes.
- There are three main types of forecasting methods: qualitative, extrapolative/time series, and causal/explanatory.
- Time series forecasting uses historical data patterns to predict future values, accounting for trends, seasonality, cycles, and randomness. Common time series forecasting techniques include moving averages, weighted moving averages, and exponential smoothing.
Predicting an Applicant Status Using Principal Component, Discriminant and Lo...inventionjournals
International Journal of Mathematics and Statistics Invention (IJMSI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJMSI publishes research articles and reviews within the whole field Mathematics and Statistics, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online
This document discusses econometrics and its applications. It defines econometrics as using statistical methods to estimate economic relationships and test economic theories. Econometrics allows estimating relationships between economic variables, testing hypotheses, and forecasting. It helps explain qualitative economic data quantitatively and evaluate government policies. Common econometric methods discussed include simple and multiple linear regression, estimation theory, and time series analysis. The document also notes some limitations of econometrics, such as not proving causation and possible issues with data interpretation.
This document describes building models to predict credit card default payments. It retrieves credit card data from a public dataset containing details on customers' personal information, credit limits, payment histories and default statuses. The data is explored through visualizations to identify relationships between variables. Two classification models are built using KNN and decision tree algorithms. The decision tree model achieves a higher accuracy of 80% compared to KNN's 74% accuracy, indicating decision trees are more suitable for predicting default payments from this credit card data.
1. The Time Variation Paths of Factors Affecting Bank Stock Returns
Kaiyi Chen
Department of Mathematics
University of Central Arkansas
August 5, 2015
Kaiyi Chen (University of Central Arkansas) FLS Model for Bank Stock Returns August 5, 2015 1 / 43
2. Outline
1 Motivation
2 Problem Description
3 Data
4 Flexible Least Squares (FLS) Method
5 Analysis & Discussion
6 Conclusion
4. Motivation
During my undergraduate studies, my second major was finance, and I have always been interested in bank performance.
According to a previous study by Dr. Ling He, bank stock returns during 1972-1995 depended on adjusted monthly percentage changes in the stock market, interest rates on long-term government bonds, and monthly changes in the median sales price of new houses. Hence, I am curious whether these factors still affect bank stock returns today. In this study we fitted the model with more recent data.
To investigate the effect of the time-varying independent variables, we use the FLS method developed by Kalaba and Tesfatsion to fit the model. In this study, we implemented the FLS method in R.
6. Problem Description
Investors have long been interested in both the risk and the performance of commercial banks.
A review of the empirical literature shows that equity market factors are useful in predicting future bank holding company performance.
More recently, a number of empirical studies have shown the potential effectiveness of using bond prices and spreads in predicting bank risk.
Real estate yields could be another factor affecting bank stock returns, as commercial banks hold substantial amounts of both residential and commercial real estate mortgages.
To quantify the performance of commercial banks, a three-index model was fitted using the variables mentioned above.
8. Data
Collection
We collected data from Nov. 1990 to Nov. 2014 for the following variables:
NASDAQ bank index
S&P 500 index
Interest rates on long-term government bonds
Median sales price of new houses
3-month Treasury bill
The first two series are collected from Yahoo Finance; the remaining series are collected from the Federal Reserve Bank of St. Louis.
9. Data
Collection
Figure 1: Original Data of S&P 500
10. Data
Processing
To minimize the effect of the differing magnitudes of the variables, we used monthly changes of the NASDAQ bank index, the S&P 500 index, and the median sales price of new houses.
To obtain a risk premium model, all four variables were further adjusted by subtracting the corresponding 3-month Treasury bill (T-bill) rate.
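This processing step can be sketched as follows (a minimal illustration in Python with NumPy; the study's actual implementation was in R, and the exact adjustment conventions here are our assumption):

```python
import numpy as np

def risk_premium(prices, tbill):
    """Monthly percentage change of a price series, minus the
    corresponding 3-month T-bill rate (both expressed as decimal
    fractions per month). prices has one more entry than tbill."""
    prices = np.asarray(prices, dtype=float)
    pct_change = np.diff(prices) / prices[:-1]  # monthly change
    return pct_change - np.asarray(tbill, dtype=float)
```

For example, `risk_premium([100, 110, 99], [0.01, 0.02])` gives the adjusted changes 0.09 and -0.12.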
11. Data
Processing
Figure 2: Monthly data minus corresponding 3-month Treasury bill rates
12. Data
Descriptive Analysis
We denote the processed variables as follows:
y  = monthly changes in the NASDAQ bank stock index, adjusted for T-bills
x1 = monthly percentage changes in the S&P 500, adjusted for T-bills
x2 = interest rates on long-term government bonds, adjusted for T-bills
x3 = monthly changes in the median sales price of new houses sold in the U.S., adjusted for T-bills

Variables   Mean     Std. Dev   Minimum   Maximum
y          -0.019    0.050      -0.249     0.140
x1         -0.021    0.046      -0.195     0.108
x2          0.017    0.015      -0.012     0.050
x3         -0.025    0.047      -0.134     0.111

Table 1: Summary Statistics from 1990 to 2014
13. Data
Descriptive Analysis
Figure 3: Time series of 4 variables
14. Data
Descriptive Analysis
Figure 4: Correlation scatterplot matrix
16. FLS
Ordinary Least Squares (OLS) Model
The general form of the ordinary least squares model is:
y = b1x1 + b2x2 + ... + bkxk + ε   (1)
where y is the dependent variable, xi, i = 1, 2, ..., k, are the k independent variables, bi, i = 1, 2, ..., k, are the parameters of the model, and ε is the error term.
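Model (1) can be estimated with any least-squares routine; a brief sketch in Python on hypothetical data (the coefficients 0.7 and -0.3 and the noise level are invented for illustration):

```python
import numpy as np

# Hypothetical data: y generated from two regressors with known
# coefficients b1 = 0.7, b2 = -0.3 plus small noise.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))
y = X @ np.array([0.7, -0.3]) + 0.01 * rng.normal(size=200)

# OLS estimate: minimize ||y - Xb||^2.
b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With this much data and little noise, `b_hat` recovers the generating coefficients closely.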
17. FLS
Flexible Least Squares Method
Suppose noisy scalar observations y1, y2, ..., yT, obtained on a process over a time span 1, 2, ..., T, are assumed to have been generated by a linear regression model whose coefficients evolve only slowly over time, if at all. More precisely, suppose these prior theoretical beliefs take the form of the FLS model:
yt = xt^T bt + εt,  t = 1, 2, ..., T.   (2)
18. FLS
Error Terms
An incompatibility cost:
C = µ rD^2 + rM^2   (3)
consisting of the µ-weighted dynamic error sum and the measurement error sum, where
rM^2 = Σ_{t=1}^{T} (yt − xt^T bt)^2
rD^2 = Σ_{t=1}^{T−1} (bt+1 − bt)^T (bt+1 − bt)
The coefficients of the model are estimated by minimizing the incompatibility cost.
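Because the incompatibility cost (3) is quadratic in the stacked coefficient path, it can be minimized with an ordinary least-squares solver. The following Python sketch is our own re-derivation, not the study's R code (function and variable names are ours): it stacks the measurement rows against the √µ-weighted dynamic rows and solves the combined system.

```python
import numpy as np

def fls(y, X, mu):
    """Flexible least squares: minimize
        mu * sum_t ||b_{t+1} - b_t||^2 + sum_t (y_t - x_t'b_t)^2
    over the coefficient paths b_1, ..., b_T.
    y: (T,) observations; X: (T, k) regressors; mu > 0.
    Returns a (T, k) array of estimated coefficient paths."""
    T, k = X.shape
    n = T * k
    # Measurement rows: y_t = x_t' b_t.
    A_meas = np.zeros((T, n))
    for t in range(T):
        A_meas[t, t*k:(t+1)*k] = X[t]
    # Dynamic rows: sqrt(mu) * (b_{t+1} - b_t) = 0.
    s = np.sqrt(mu)
    A_dyn = np.zeros(((T - 1) * k, n))
    for t in range(T - 1):
        A_dyn[t*k:(t+1)*k, t*k:(t+1)*k] = -s * np.eye(k)
        A_dyn[t*k:(t+1)*k, (t+1)*k:(t+2)*k] = s * np.eye(k)
    A = np.vstack([A_meas, A_dyn])
    rhs = np.concatenate([y, np.zeros((T - 1) * k)])
    b, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return b.reshape(T, k)
```

As µ grows, the dynamic penalty dominates and the estimated paths flatten toward a single constant coefficient vector, consistent with the role of µ described on the next slide.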
19. FLS
Role of µ
The predetermined positive constant µ of the FLS model plays a crucial role in identifying suitable FLS coefficients. The larger µ is, the less time-sensitive the FLS coefficients are: for i = 1, 2, 3,
bit → bi as µ → ∞,   (4)
where bi is the corresponding OLS coefficient.
21. FLS
Example
We have taken this example from Kalaba and Tesfatsion [p. 1228]. In this example, the values yt, t = 1, 2, ..., T, are computed as follows:
yt = xt^T bt + εt,  t = 1, 2, ..., T,   (5)
where
xt = ( sin(10 + t) + 0.01, cos(10 + t) )^T,  t = 1, 2, ..., T,
bt = ( sin(4πt/10), sin(2πt/10) )^T,  t = 1, 2, ..., T.
We used the values of yt and xt computed from these equations, for different values of µ and T, as inputs for the R code we developed, and computed estimates of bt.
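The test data above can be generated directly; here is a noise-free sketch in Python (the slides do not state whether noise εt was added for the plots, so it is omitted here):

```python
import numpy as np

def kt_example(T):
    """Kalaba-Tesfatsion test data:
    x_t = (sin(10+t) + 0.01, cos(10+t)),
    b_t = (sin(4*pi*t/10), sin(2*pi*t/10)),
    y_t = x_t . b_t, for t = 1, ..., T (noise-free version)."""
    t = np.arange(1, T + 1, dtype=float)
    X = np.column_stack([np.sin(10 + t) + 0.01, np.cos(10 + t)])
    B = np.column_stack([np.sin(4 * np.pi * t / 10),
                         np.sin(2 * np.pi * t / 10)])
    y = np.sum(X * B, axis=1)  # y_t = x_t' b_t
    return X, B, y
```

Feeding the returned X and y into an FLS routine for various µ and T reproduces the comparison of exact and estimated coefficient paths shown in the figures.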
22. FLS
Example
Figure 5: Estimated & exact coefficients (b1, b2 plotted against n) when µ = 1000 and T = 1000
23. FLS
Example
Figure 5: Estimated & exact coefficients (b1, b2 plotted against n) when µ = 10 and T = 1000
24. FLS
Example
Figure 5: Estimated & exact coefficients (b1, b2 plotted against n) when µ = 0.1 and T = 1000
25. FLS
Example
When the value of µ is fixed, the fit of the coefficients improves as T increases.
Figure 6: Estimated & exact coefficients when µ = 1 and T = 30
26. FLS
Example
Figure 6: Estimated & exact coefficients when µ = 1 and T = 100
27. FLS
Example
Figure 6: Estimated & exact coefficients when µ = 1 and T = 1000
29. Analysis & Discussion
Building the FLS Model
Following Dr. He's previous study, the three-index FLS model was built:
yt = b1t x1t + b2t x2t + b3t x3t + εt,  t = 1, 2, ..., T   (6)
30. Analysis & Discussion
Building the FLS Model
The corresponding OLS model is:
y = b1x1 + b2x2 + b3x3 + ε   (7)
31. Analysis & Discussion
Role of µ
Figure 7: FLS estimated coefficients (MKT, BOND, PRICE) when µ = 0.1
32. Analysis & Discussion
Role of µ
Figure 7: FLS estimated coefficients (MKT, BOND, PRICE) when µ = 0.5
33. Analysis & Discussion
Role of µ
Figure 7: FLS estimated coefficients (MKT, BOND, PRICE) when µ = 0.9
34. Analysis & Discussion
Procedure to estimate µ
How do we estimate a proper µ value?
We answer this question by building various OLS models over different non-overlapping sub-intervals of the entire time region. We identify the sub-intervals by using the procedure discussed on the next slide.
37. Analysis & Discussion
Procedure to estimate µ
1 START by setting a value for µ (chosen from (0, 0.9)).
2 Build an FLS model for the entire time period [1, T].
3 Identify the time-varying paths (time-series plots) of the FLS coefficients.
4 Identify the FLS coefficients whose time-varying paths do not change sign. If either of the following conditions fails, go to Step 1 and repeat Steps 1 through 4 with a different value of µ; else go to Step 5:
a. the corresponding OLS coefficients are statistically significant, and
b. the signs of the time-varying paths of the FLS coefficients are the same as the signs of the corresponding OLS coefficients.
38. Analysis & Discussion
Procedure to estimate µ
5 Identify an FLS coefficient satisfying the following conditions over the interval [1, T]:
a. there is at least one sign change, and
b. it has the minimum number of sign changes compared to the other FLS coefficients.
6 Identify the time points at which the FLS coefficient identified in Step 5 changes sign.
7 Divide the interval [1, T] into subintervals with endpoints at the points identified in Step 6, together with the initial point 1 and the end point T.
8 Construct as many OLS models as there are subintervals identified in Step 7, using the corresponding datasets of these intervals.
9 Check
a. whether the coefficients of these OLS models are statistically significant, and
b. whether the signs of the OLS coefficients are the same as the signs of the corresponding FLS coefficient identified in Step 5.
39. Analysis & Discussion
Procedure to estimate µ
10 If both conditions of Step 9 are satisfied, then the path of the FLS coefficient identified in Step 5 is a reliable path for analyzing the dependence between y and the corresponding independent variable.
11 Identify the next FLS coefficient satisfying the conditions in Step 5, and repeat Steps 6 through 10. If there are no more such coefficients, then EXIT the procedure.
12 If either of the conditions of Step 9 is not satisfied, then adjust the time points found in Step 6 (which may include merging or splitting some of the subintervals of Step 7) and repeat Steps 8 through 11. If either of the conditions of Step 9 is still not satisfied, go to Step 1 and repeat all steps for another value of µ.
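Steps 6-8 of the procedure can be sketched as follows (illustrative Python with hypothetical helper names; the study performed these steps with its own R code, and the handling of exact zeros in a path is our assumption):

```python
import numpy as np

def sign_change_points(path):
    """Indices where a coefficient path changes sign (Step 6).
    Exact zeros inherit the previous sign."""
    s = np.sign(np.asarray(path, dtype=float))
    for i in range(1, len(s)):
        if s[i] == 0:
            s[i] = s[i - 1]
    return [i for i in range(1, len(s)) if s[i] != s[i - 1]]

def subinterval_ols(y, X, cuts):
    """One OLS fit per subinterval delimited by the cut points
    (Steps 7-8). Returns a list of coefficient vectors."""
    bounds = [0] + list(cuts) + [len(y)]
    coefs = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        b, *_ = np.linalg.lstsq(X[lo:hi], y[lo:hi], rcond=None)
        coefs.append(b)
    return coefs
```

The per-subinterval coefficients and their significance can then be compared against the FLS path, as required by Step 9.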
40. Analysis & Discussion
Estimate µ value for x1 (stock market)
For x1, the coefficient remained positive throughout the entire time period, no matter how the µ value was changed. The results of the fitted OLS model are shown below:

Variables   Est. Coef   P-value
x1           0.7572     < 0.001
x2           0.1554     0.1116
x3           0.0765     < 0.1

Table 2: Time period from Nov. 1990 to Nov. 2014

Hence, the value of µ can be as small as we want, and we can say that the µ value for the FLS model does not depend on x1.
41. Analysis & Discussion
Estimate µ value for x2 (government bond)
We identified three sub-intervals (Nov. 1990 to May 2006, Jun. 2006 to Oct. 2008, and Nov. 2008 to Nov. 2014) with the FLS model when µ = 0.5.
The following two slides show the results of the OLS models and a figure of the FLS & OLS estimates.
42. Analysis & Discussion
Estimate µ value for x2 (government bond)

Variables   Est. Coef   P-value
x1           0.67527    < 0.001
x2           0.2315     < 0.05
x3           0.08048    0.1292

Table 3: Time period from Nov. 1990 to May 2006

Variables   Est. Coef   P-value
x1           0.8194     < 0.001
x2           0.7675     0.2221
x3           0.1402     0.388138

Table 4: Time period from Jun. 2006 to Oct. 2008

Variables   Est. Coef   P-value
x1           1.11081    < 0.001
x2          -0.4773     < 0.01
x3           0.13168    0.07335

Table 5: Time period from Nov. 2008 to Nov. 2014
43. Analysis & Discussion
Estimate µ value for x2 (government bond)
Figure 8: FLS & OLS estimation for BOND
44. Analysis & Discussion
Estimate µ value for x3 (sales price of new houses)

Variables   Est. Coef   P-value
x1           0.7362     0.001
x2           0.1789     0.395
x3          -0.1118     0.319

Table 6: Time period from Nov. 1990 to Jan. 1994

Variables   Est. Coef   P-value
x1           0.75452    < 0.001
x2           0.03971    0.7412
x3           0.1099     < 0.05

Table 7: Time period from Feb. 1994 to Nov. 2014
45. Analysis & Discussion
Estimate µ value for x3 (sales price of new houses)
Figure 9: FLS & OLS estimation for PRICE
46. Analysis & Discussion
Summary
The market performance, based on the S&P 500 index, had a significant positive impact during the entire period Nov. 1990 through Nov. 2014.
The long-term government bond rates had a significant positive impact from Nov. 1990 through May 2006, no significant impact from Jun. 2006 through Oct. 2008, and a significant negative impact from Nov. 2008 through Nov. 2014.
The sales price of new houses had a significant positive impact after Jan. 1994, with a decline during 2007 through 2011.
These mixed impacts of the independent variables reflect the effects on bank stock returns of the events of the period, which covered the entire financial meltdown caused by massive subprime mortgage lending.
48. Conclusion
The objectives of this study were to implement the flexible least squares (FLS) method in R, validate it, and use it to compute the time-varying coefficients of the S&P 500 index, the interest rate on long-term government bonds, and the median sale price of new houses, in order to analyze the effect of these variables on the NASDAQ bank stock index for the period Nov. 1990 to Nov. 2014.
Major findings of this study include: long-term government bond rates had no significant impact on bank stock returns during the period Jun. 2006 to Oct. 2008, and the coefficient of the sales price of new houses shows a declining trend during 2007 to 2011.
49. Possible extension
One possible extension of the present work is to automate this procedure adaptively based on the OLS coefficients and their statistical significance. This would significantly reduce the time needed to identify reliable time-varying paths of FLS coefficients, and do so more objectively.
A second possible extension would be to replace the S&P 500 index data with the New York Stock Exchange (NYSE) composite index and study the effect of the time-varying coefficient of this series on bank stock returns.
As another possible extension, more interesting and relevant independent variables could be added to the model to study their effect on bank stock returns.
50. Reference I
Ling T. He and Alan K. Reichert. Time variation paths of factors affecting financial institutions and stock returns. Atlantic Economic Journal, 31(1):71-86, 2003.
R. Kalaba and L. Tesfatsion. Time-varying linear regression via flexible least squares. Computers Math. Applic., 17(8/9):1215-1245, 1989.
Yahoo Finance. NASDAQ Bank Index. http://finance.yahoo.com/q/hp?s=%5EBANK+Historical+Prices
Federal Reserve Bank of St. Louis. Long-Term Government Bond Yields. https://research.stlouisfed.org/fred2/series/IRLTLT01DEM156N