International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design, etc.
This paper considers that a series may exhibit repetitive or cyclical behavior over time, drawing on Fourier analysis, which is an important part of the modern treatment of economic time series. The goal is to test Granger causality between financial deepening and economic performance in MENA countries using spectral analysis, a special case of Fourier analysis, across different time horizons (short, medium, and long term) without subdividing the study period, which extends from 1970 to 2014. For reliable results, the sample was divided into two subsamples: the countries of the Gulf Cooperation Council (GCC), which have high incomes, and the other countries.
At high frequencies, the estimates show that the real and financial sectors maintain causal relationships, revealing a limitation of the conventional method of causality assessment, which in many cases finds a complete absence of connection between the proxies. In the long term, finance dominates in some GCC countries, while the opposite holds in the other countries.
The main conclusion is that the causal relationship between finance and growth is not linear but varies with the chosen time horizon.
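The spectral approach described above rests on decomposing a series into frequency components. As a minimal illustration of that idea (a plain periodogram via the FFT, not the authors' actual frequency-domain Granger test):

```python
import numpy as np

def periodogram(x):
    """Estimate the power contributed by each frequency in a series.

    Returns (frequencies, power); peaks mark dominant cycles.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()                   # remove the mean (zero-frequency term)
    spec = np.fft.rfft(x)              # one-sided discrete Fourier transform
    power = (np.abs(spec) ** 2) / n    # periodogram ordinates
    freqs = np.fft.rfftfreq(n, d=1.0)  # cycles per observation
    return freqs, power

# A synthetic series with a 10-period cycle plus noise.
rng = np.random.default_rng(0)
t = np.arange(200)
series = np.sin(2 * np.pi * t / 10) + 0.3 * rng.standard_normal(200)
freqs, power = periodogram(series)
peak = freqs[np.argmax(power[1:]) + 1]  # skip the zero frequency
# The detected peak should sit near 1/10 = 0.1 cycles per observation.
```

High frequencies in such a decomposition correspond to the short-term comovements the abstract refers to; low frequencies correspond to the long term.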
Aplicaciones de espacios y subespacios vectoriales en la carrera de Electróni... (MATEOESTEBANCALDERON)
Vector spaces and subspaces are applied in many fields of everyday life. In engineering they are broadly useful; this work, however, analyzes their application to a specific area of electronics and automation engineering.
This study aims to detect the trend in seasonal rainfall for the four rainy months, i.e., June, July, August, and September. To determine the trend of rainfall, the non-parametric Mann-Kendall test was used, together with the non-parametric Sen's slope estimator for determining the magnitude of the trend. Linear regression analysis, a parametric model useful for developing functional relationships between variables, was also applied to determine the rainfall trend for the study area. All statistical test results indicated some change in the trend of rainfall during the rainy months.
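The two non-parametric tools named above are compact to state: the Mann-Kendall S statistic sums the signs of all later-minus-earlier pairs, and Sen's slope is the median of all pairwise slopes. A minimal sketch (not the study's own code):

```python
import itertools
import numpy as np

def mann_kendall_s(x):
    """Mann-Kendall S statistic: sum of signs over all later-minus-earlier
    pairs. S > 0 suggests an increasing trend, S < 0 a decreasing one."""
    x = np.asarray(x, dtype=float)
    return int(sum(np.sign(x[j] - x[i])
                   for i, j in itertools.combinations(range(len(x)), 2)))

def sens_slope(x):
    """Sen's slope: median of all pairwise slopes (x[j]-x[i])/(j-i)."""
    x = np.asarray(x, dtype=float)
    slopes = [(x[j] - x[i]) / (j - i)
              for i, j in itertools.combinations(range(len(x)), 2)]
    return float(np.median(slopes))

# A mostly rising "rainfall" series: both statistics should agree in sign.
rain = [100, 104, 103, 110, 115, 114, 121, 125]
s = mann_kendall_s(rain)   # positive for an upward trend
slope = sens_slope(rain)   # magnitude of the trend per time step
```

In practice S is normalized into a Z statistic for significance testing; the sketch stops at the raw statistic.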
Week 4 forecasting - time series - smoothing and decomposition - m.awaluddin.t (Maling Senk)
Forecasting - time series - smoothing and decomposition methods.
Smoothing methods such as moving averages and exponential methods; the steps of decomposition methods, with an example. Case study of smoothing methods using Single Exponential Smoothing, Double Exponential Smoothing, and Triple Exponential Smoothing.
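The simplest of the three methods mentioned, single exponential smoothing, is the recursion s_t = α·x_t + (1−α)·s_{t−1}. A minimal sketch, assuming a fixed smoothing constant α and the first observation as the initial level:

```python
def single_exponential_smoothing(x, alpha):
    """Single exponential smoothing: each smoothed value is a weighted
    average of the current observation and the previous smoothed value."""
    if not 0 < alpha <= 1:
        raise ValueError("alpha must be in (0, 1]")
    smoothed = [float(x[0])]  # initialize with the first observation
    for value in x[1:]:
        smoothed.append(alpha * value + (1 - alpha) * smoothed[-1])
    return smoothed

demand = [10, 12, 13, 12, 15, 16, 14]
level = single_exponential_smoothing(demand, alpha=0.5)
forecast = level[-1]  # one-step-ahead forecast equals the last smoothed level
```

Double and triple exponential smoothing add, respectively, a trend equation and a seasonal equation on top of this level recursion.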
A Study on Performance Analysis of Different Prediction Techniques in Predict... (IJRES Journal)
Time series data is statistical data associated with specific instants or periods of time, with measurements recorded at regular intervals such as monthly, quarterly, or yearly. Most researchers apply a single prediction technique to a time series; they rarely test several techniques on the same data set or compare their performance on it. In this work, several well-known prediction techniques are applied to the same time series data set. The average error and a residual analysis are computed for each technique, and one technique is selected on the basis of the minimum average error and the residual analysis. The residual analysis comprises the absolute residuals, the maximum residual, the median of the absolute residuals, the mean of the absolute residuals, and their standard deviation. To finalize the choice, the same procedure is applied to different time series data sets, and the technique that gives the minimum error and the smallest residual statistics in most cases is selected.
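The residual statistics listed in the abstract are straightforward to compute; a minimal sketch (the paper's exact definitions are assumed, not confirmed):

```python
import numpy as np

def residual_summary(actual, predicted):
    """Summary statistics over |actual - predicted|, matching the list in
    the abstract: max, median, mean, and standard deviation of the
    absolute residuals."""
    abs_resid = np.abs(np.asarray(actual, float) - np.asarray(predicted, float))
    return {
        "max": float(abs_resid.max()),
        "median": float(np.median(abs_resid)),
        "mean": float(abs_resid.mean()),
        "std": float(abs_resid.std(ddof=1)),  # sample standard deviation
    }

summary = residual_summary([10, 12, 11, 14], [9, 12, 13, 13])
```

Comparing techniques then reduces to computing this dictionary per technique and ranking on the chosen fields.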
The models, principles, and steps of Bayesian time series analysis and forecasting have been developed extensively over the past fifty years. To estimate the parameters of an autoregressive (AR) model, this thesis develops Markov chain Monte Carlo (MCMC) schemes for AR inference and proposes a new prior distribution placed directly on the AR parameters. To that end, the stationarity conditions are revisited in order to obtain a flexible prior, and a set of sufficient stationarity conditions is proposed for autoregressive models of any lag order. The methodology is motivated with the AR(2) and AR(3) models, and the sufficiency and necessity of the proposed conditions are studied through simulation; the AR(3) parameter space is drawn both for the stationary region of Barndorff-Nielsen and Schou (1973) and for the newly suggested conditions. The new prior, motivated by priors proposed for AR(1) through AR(6) that exploit the range of the AR parameters, is placed directly on the parameters of the AR(p) model. A Metropolis step within Gibbs sampling is then developed for estimation, illustrated on simulated data for the AR(2), AR(3), and AR(4) models, and extended to models of higher lag order. Finally, the thesis compares the proposed prior with the priors obtained from the correspondence between partial autocorrelations and parameters discussed by Barndorff-Nielsen and Schou (1973).
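The classical stationarity condition that such proposals are alternatives to is stated on the characteristic polynomial: an AR(p) process is stationary when all roots of 1 − φ₁z − … − φ_p·z^p lie outside the unit circle. A minimal numerical check (a standard criterion, not the thesis's new conditions):

```python
import numpy as np

def is_stationary(phi):
    """Check AR(p) stationarity via the characteristic polynomial
    1 - phi_1*z - ... - phi_p*z^p: all roots must lie outside the unit circle."""
    coeffs = np.concatenate(([1.0], -np.asarray(phi, dtype=float)))
    # np.roots expects coefficients ordered from the highest power down.
    roots = np.roots(coeffs[::-1])
    return bool(np.all(np.abs(roots) > 1.0))

# AR(2) examples: the classic triangle conditions are
# phi_1 + phi_2 < 1, phi_2 - phi_1 < 1, |phi_2| < 1.
stationary = is_stationary([0.5, 0.3])  # inside the triangle
explosive = is_stationary([1.2, 0.1])   # phi_1 + phi_2 > 1, outside it
```

A prior placed directly on (φ₁, …, φ_p) can use such a check to truncate its support to the stationary region.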
Time Series Analysis - 1 | Time Series in R | Time Series Forecasting | Data ... (Simplilearn)
This Time Series Analysis (Part 1) in R presentation will help you understand what a time series is, why to use one, the components of a time series, when not to use time series, why a time series has to be stationary, and how to make it stationary; at the end, you will also see a use case in which we forecast car sales for the fifth year using the given data. A time series is a sequence of data recorded at specific time intervals; past values are analyzed to forecast a time-dependent future. Compared with other forecasting algorithms, time series methods deal with a single variable that depends on time. So let's dive into this presentation and understand what time series is and how to implement it using R.
Below topics are explained in this "Time Series in R Tutorial" -
1. Why time series?
2. What is time series?
3. Components of a time series
4. When not to use time series?
5. Why does a time series have to be stationary?
6. How to make a time series stationary?
7. Example: Forecast car sales for the 5th year
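The most common answer to item 6 above, making a series stationary, is differencing. A minimal sketch of first-order differencing (an assumed illustration, not taken from the presentation, and in Python rather than R for consistency with the other examples here):

```python
import numpy as np

def difference(x, order=1):
    """Apply first-order differencing `order` times; differencing removes
    a (polynomial) trend and often renders a series stationary."""
    x = np.asarray(x, dtype=float)
    for _ in range(order):
        x = np.diff(x)  # x[t] - x[t-1]
    return x

# A series with a pure linear trend: one difference leaves a constant series.
trend = np.array([3.0, 5.0, 7.0, 9.0, 11.0])
diffed = difference(trend)  # the trend is gone; only the constant step remains
```

Each difference shortens the series by one observation, which is why over-differencing is avoided in practice.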
Become an expert in data analytics using the R programming language in this data science certification training course. You’ll master data exploration, data visualization, predictive analytics and descriptive analytics techniques with the R language. With this data science course, you’ll get hands-on practice on R CloudLab by implementing various real-life, industry-based projects in the domains of healthcare, retail, insurance, finance, airlines, music industry, and unemployment.
Why learn Data Science with R?
1. This course forms an ideal package for aspiring data analysts looking to build a successful career in analytics/data science. By the end of this training, participants will acquire a 360-degree overview of business analytics and R by mastering concepts like data exploration, data visualization, and predictive analytics.
2. According to marketsandmarkets.com, the advanced analytics market will be worth $29.53 Billion by 2019
3. Wired.com points to a report by Glassdoor that the average salary of a data scientist is $118,709
4. Randstad reports that pay hikes in the analytics industry are 50% higher than IT
The Data Science with R is recommended for:
1. IT professionals looking for a career switch into data science and analytics
2. Software developers looking for a career switch into data science and analytics
3. Professionals working in data and business analytics
4. Graduates looking to build a career in analytics and data science
5. Anyone with a genuine interest in the data science field
6. Experienced professionals who would like to harness data science in their fields
Learn more at: https://www.simplilearn.com/
Abstract— A numerical investigation is presented to examine the nonlinear steady mixed convection boundary layer flow and heat transfer of an incompressible tangent hyperbolic non-Newtonian fluid from a non-isothermal wedge in the presence of a magnetic field. The transformed conservation equations are solved numerically, subject to physically appropriate boundary conditions, using a second-order accurate implicit finite difference Keller box method. The numerical code is validated against previous studies. The influence of the various emerging non-dimensional parameters, namely the Weissenberg number (We), power law index (n), mixed convection parameter, pressure gradient parameter (m), Prandtl number (Pr), Biot number, magnetic parameter (M), and the dimensionless tangential coordinate, on velocity and temperature evolution in the boundary layer regime is examined in detail. Furthermore, the effects of these parameters on the surface heat transfer rate and local skin friction are also investigated. Validation against earlier Newtonian studies is presented, and excellent agreement is achieved. It is found that velocity is reduced with increasing We, while temperature is enhanced. Increasing n raises velocity but decreases temperature, and a comparable pattern is observed across cases. Increasing M is found to decrease velocity, while temperature increases.
Keywords— Magnetic parameter, mixed convection parameter, non-Newtonian tangent hyperbolic fluid, power law index, Weissenberg number, pressure gradient parameter.
http://inarocket.com
Learn BEM fundamentals as fast as possible: what BEM is (Block, Element, Modifier), BEM syntax, and how it works with a real example.
Succession “Losers”: What Happens to Executives Passed Over for the CEO Job?
By David F. Larcker, Stephen A. Miles, and Brian Tayan
Stanford Closer Look Series
Overview:
Shareholders pay considerable attention to the choice of executive selected as the new CEO whenever a change in leadership takes place. However, without an inside look at the leading candidates to assume the CEO role, it is difficult for shareholders to tell whether the board has made the correct choice. In this Closer Look, we examine CEO succession events among the largest 100 companies over a ten-year period to determine what happens to the executives who were not selected (i.e., the “succession losers”) and how they perform relative to those who were selected (the “succession winners”).
We ask:
• Are the executives selected for the CEO role really better than those passed over?
• What are the implications for understanding the labor market for executive talent?
• Are differences in performance due to operating conditions or quality of available talent?
• Are boards better at identifying CEO talent than other research generally suggests?
Lightning Talk #9: How UX and Data Storytelling Can Shape Policy, by Mika Aldabaux (Singapore)
How can we take UX and Data Storytelling out of the tech context and use them to change the way government behaves?
Showcasing the truth is the highest goal of data storytelling. Because the design of a chart can affect the interpretation of data in a major way, one must wield visual tools with care and deliberation. Using quantitative facts to evoke an emotional response is best achieved with the combination of UX and data storytelling.
SSA-based hybrid forecasting models and applications (journal BEEI)
This study combines SSA (Singular Spectrum Analysis) with other methods to improve forecasting performance for time series with complex patterns. It discusses two modifications of TLSAR (Two-Level Seasonal Autoregressive) modeling that exploit the SSA decomposition results, namely TLSNN (Two-Level Seasonal Neural Network) and TLCSNN (Two-Level Complex Seasonal Neural Network). TLSAR consists of a linear trend, a harmonic component, and an autoregressive component; in contrast, the two proposed hybrid approaches consist of a flexible trend function, harmonics, and neural networks. The trend and harmonic functions are treated as the deterministic part, identified from the SSA decomposition, while the NN handles the nonlinear relationships in the stochastic part. These two SSA-based hybrid models are intended to be more flexible than TLSAR and more applicable to series with intricate patterns. Experiments on the monthly accidental deaths in the USA and the daily Jawa-Bali electricity load show that the proposed SSA-based hybrid models reduce the testing-data RMSE obtained by the TLSAR model by up to 95%.
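The SSA decomposition at the heart of this and the following abstracts embeds the series in a trajectory (Hankel) matrix, takes its SVD, and maps each singular triple back to an additive series component. A minimal sketch, assuming one component per singular triple and diagonal averaging for reconstruction:

```python
import numpy as np

def ssa_components(x, window):
    """Basic SSA: embed the series in a Hankel trajectory matrix, take the
    SVD, and reconstruct one additive component per singular triple by
    diagonal averaging. The components sum back to the original series."""
    x = np.asarray(x, dtype=float)
    n, L = len(x), window
    K = n - L + 1
    traj = np.column_stack([x[i:i + L] for i in range(K)])  # L x K Hankel matrix
    U, s, Vt = np.linalg.svd(traj, full_matrices=False)
    components = []
    for k in range(len(s)):
        elem = s[k] * np.outer(U[:, k], Vt[k])  # rank-1 elementary matrix
        # Diagonal averaging (Hankelization) maps the matrix back to a series:
        # entry (i, j) contributes to series position i + j.
        comp = np.array([np.mean(elem[::-1, :].diagonal(i - (L - 1)))
                         for i in range(n)])
        components.append(comp)
    return components

t = np.arange(48)
series = 0.1 * t + np.sin(2 * np.pi * t / 12)  # trend + annual cycle
parts = ssa_components(series, window=12)
reconstructed = np.sum(parts, axis=0)          # should match the input
```

In a forecasting pipeline the leading components (trend, harmonics) form the deterministic part and the remainder is handed to the stochastic model, as the abstract describes.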
A Singular Spectrum Analysis Technique to Electricity Consumption Forecasting (IJERA Editor)
Singular Spectrum Analysis (SSA) is a relatively new and powerful nonparametric tool for analyzing and forecasting economic data. SSA can decompose a time series into independent components such as trend, oscillatory behavior, and noise. This paper applies the SSA approach to the monthly electricity consumption of the Middle Province in the Gaza Strip, Palestine. The forecasting results are compared with those of exponential smoothing state space (ETS) and ARIMA models. The three techniques perform similarly in the forecasting process; however, SSA outperforms ETS and ARIMA according to forecast error accuracy measures.
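Forecast error accuracy measures of the kind used for such comparisons typically include RMSE and MAPE; a minimal sketch of both (the paper's exact measures are not specified here):

```python
import math

def rmse(actual, forecast):
    """Root mean squared error: penalizes large errors quadratically."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast))
                     / len(actual))

def mape(actual, forecast):
    """Mean absolute percentage error, in percent; actuals must be nonzero."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) \
           / len(actual)

actual = [100.0, 110.0, 120.0, 130.0]
forecast = [98.0, 112.0, 119.0, 133.0]
```

RMSE keeps the units of the series, while MAPE is scale-free, which is why comparisons across models usually report both.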
Hybrid model for forecasting space-time data with calendar variation effects (TELKOMNIKA JOURNAL)
This research proposes a new hybrid model, the Generalized Space-Time Autoregressive with Exogenous Variable and Neural Network (GSTARX-NN) model, for forecasting space-time data with calendar variation effects. The GSTARX part is a linear component with an exogenous variable capturing a calendar variation effect such as Eid al-Fitr, whereas the NN part models the nonlinear component. Two studies were conducted: simulation studies and an application to monthly inflow and outflow currency data of Bank Indonesia in the East Java region. The simulation study shows that the hybrid GSTARX-NN model captures the data patterns well, i.e., trend, seasonality, calendar variation, and both linear and nonlinear noise series. Moreover, based on RMSE on the testing dataset, the application to the inflow and outflow data shows that the hybrid GSTARX-NN models tend to give more accurate forecasts than the VARX and GSTARX models. These results are in line with the conclusion of the M3 forecasting competition that hybrid or combined models on average yield better forecasts than individual models.
Linear Regression Model Fitting and Implication to Self Similar Behavior Traf... (IOSR journal jce)
We present a simple linear regression model fitted to the self-similar behavior of Internet users' arrival patterns. It has been reported that Internet traffic exhibits self-similarity; motivated by this fact, real-time Internet user arrival patterns were treated as traffic, and the analysis confirmed their self-similar nature using several Hurst index estimation methods. The study provides a linear regression equation as a tool to predict the arrival pattern of Internet user data at web centers. The numerical results and analysis presented here play a significant role in improving services and in the forecasting analysis of arrival protocols at web centers from the standpoint of quality of service (QoS).
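One of the Hurst index methods alluded to, the aggregated-variance method, is itself a linear regression: for a self-similar series, log Var(X^(m)) versus log m has slope 2H − 2. A minimal sketch (assumed details, not the paper's implementation):

```python
import numpy as np

def hurst_aggregated_variance(x, block_sizes=(2, 4, 8, 16, 32)):
    """Estimate the Hurst exponent H by the aggregated-variance method:
    average the series over blocks of size m, regress log(variance of the
    block means) on log(m); the fitted slope equals 2H - 2."""
    x = np.asarray(x, dtype=float)
    log_m, log_var = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        means = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(means.var()))
    slope, _ = np.polyfit(log_m, log_var, 1)  # least-squares regression line
    return slope / 2 + 1

# White noise has no long-range dependence, so H should come out near 0.5;
# long-range-dependent traffic yields H clearly above 0.5.
rng = np.random.default_rng(1)
h = hurst_aggregated_variance(rng.standard_normal(4096))
```

Values of H significantly above 0.5 are the usual evidence for the self-similarity of traffic that the abstract reports.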
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Salahaddin A. Ahmed et al., Int. Journal of Engineering Research and Applications
ISSN: 2248-9622, Vol. 4, Issue 2 (Version 1), February 2014, pp. 280-292
RESEARCH ARTICLE    OPEN ACCESS    www.ijera.com
Box-Jenkins Models for Forecasting the Daily Degrees of Temperature in Sulaimani City
Samira Muhammad Salh (1), Salahaddin A. Ahmed (2)
(1) University of Sulaimani, College of Administration and Economics, Statistics Department, Sulaimani, Iraq.
(2) University of Sulaimani, Faculty of Science and Science Education, Physics Department, Sulaimani, Iraq.
Abstract:
The autoregressive model is one of the most widely used statistical tools in time series analysis because it provides a simple way to describe the relationship between the variables of a time series. It is also one of the Box-Jenkins models used to forecast future values of a phenomenon. This study presents a practical analysis of autoregressive time series models, applying the Box-Jenkins methodology to forecast the daily temperature in Sulaimani city for the period 2012 to September 2013, building a model from the temperature data and using it to compute future forecasts. The approach is descriptive and analytical, based on data collected from official sources and treated statistically. To reach this aim, a theoretical part covers the concept of time series, autocorrelation, and the Box-Jenkins models, and a practical part presents the statistical analysis of the data. The main conclusion of the practical study is that the most suitable model for the series is the combined autoregressive and moving average model of order (1,1,1), i.e. ARIMA(1,1,1).
Keywords: Box-Jenkins models, Time series, Autoregressive model, Moving average models, ARMA models, Sulaimani city.
I. Introduction
Many economic and administrative studies focus on forecasting the future because of its significant impact in these fields. Studying the time series of a phenomenon reveals the nature of the changes it undergoes and what may happen over the coming years in light of its past behavior, and researchers have presented several studies on building models for time series. Time series analysis decomposes a series into its influencing components: the general trend, seasonal changes, cyclical changes, and others. The autoregressive model is one of the most widely used statistical tools because it provides an easy way to determine the relationship between the variables of a time series; this relationship can be expressed in the form of an equation, and it is one of the Box-Jenkins models for time series analysis and forecasting. Such models have practical applications in economics, administration, and weather forecasting, including the prediction of rainfall amounts and temperatures, which have significant effects on agriculture, industry, marine navigation and other areas. These models are therefore particularly important in planning and forecasting.
II. Objective of this research
The research aims to provide an applied analytical study of autoregressive time series models, using the Box-Jenkins methodology to forecast the daily temperature in the city of Sulaimani in 2012, building a model from the temperature data and using it to calculate future forecasts.
III. Research Methodology
This is applied research that attempts to build an appropriate (suitable) model for temperatures in the city of Sulaimani in 2012 and to use it for forecasting.
IV. The Theoretical Part
The Concept and Types of Time Series
A time series is a collection of observations of well-defined data items obtained through repeated measurements over time. There are two types of time series: stationary and non-stationary.
A time series is a set of observations {xt}, each one being recorded at a specific time t. A time series model for the observed data {xt} is a specification of the joint distributions (or possibly only the means and covariances) of a sequence of random variables {Xt} of which {xt} is postulated to be a realization.
In reality, we can only observe the time series at a finite number of times, and in that case the underlying sequence of random variables (X1, X2, . . . , Xn) is just an n-dimensional random variable (or random vector). Often, however, it is convenient to allow the number of observations to be infinite. In that case {Xt, t = 1, 2, . . .} is called a stochastic process. In order to specify its statistical properties we then need to consider all n-dimensional distributions
P[X1 ≤ x1, . . . , Xn ≤ xn],  for all n = 1, 2, . . .
A process {Xt, t ∈ Z} is said to be independent and identically distributed (IID) noise with mean 0 and variance σ², written {Xt} ~ IID(0, σ²), if the random variables Xt are independent and identically distributed with
E(Xt) = 0 and Var(Xt) = σ².
Then the mean is μ = E(wt), the variance is
var(wt) = E[(wt − μ)²],
and the autocovariance at lag k is
γk = cov(wt, wt+k) = E[(wt − μ)(wt+k − μ)].
If we assume that we have the time series w1, w2, w3, . . . , wN, then w̄, s²w and ck each estimate μ, var(wt) and γk respectively, where
w̄ = (1/N) Σt=1..N wt ,   s²w = (1/(N−1)) Σt=1..N (wt − w̄)²    (1)
ck = (1/(N−1)) Σt=1..N−k (wt − w̄)(wt+k − w̄),  k = 1, 2, 3, . . .    (2)
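The estimators in (1) and (2) can be sketched in a few lines of Python. This is a minimal illustration, not code from the paper; the N−1 divisor follows the formulas above, and the toy series is invented for the example.

```python
# Sketch of the sample mean, variance and autocovariance estimators (1)-(2).

def sample_mean(w):
    return sum(w) / len(w)

def sample_variance(w):
    # s^2_w = (1/(N-1)) * sum_t (w_t - wbar)^2
    wbar = sample_mean(w)
    return sum((x - wbar) ** 2 for x in w) / (len(w) - 1)

def sample_autocov(w, k):
    # c_k = (1/(N-1)) * sum_{t=1}^{N-k} (w_t - wbar)(w_{t+k} - wbar)
    n = len(w)
    wbar = sample_mean(w)
    return sum((w[t] - wbar) * (w[t + k] - wbar) for t in range(n - k)) / (n - 1)

series = [2.0, 4.0, 6.0, 8.0, 10.0]  # hypothetical toy data
print(sample_mean(series))        # 6.0
print(sample_variance(series))    # 10.0
print(sample_autocov(series, 1))  # 4.0
```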
We can distinguish between the two types of time series through the values of the correlation coefficients between the observations.
V. Models for Time Series
Let us begin this section with the following wonderful quotation:
"Experience with real-world data, however, soon convinces one that both stationarity and Gaussianity are fairy tales invented for the amusement of undergraduates." (Thomson 1994).
Bearing this in mind, stationary models form the basis for a huge proportion of time series analysis methods. As is true for a great deal of mathematics, we can begin with very simple building blocks and then build structures of increasing complexity. In time series analysis, the basic building block is the purely random process.
Loosely speaking, a stationary process is one whose statistical properties do not change over time. More formally, a strictly stationary stochastic process is one where, given t1, . . . , tℓ, the joint statistical distribution of Xt1, . . . , Xtℓ is the same as the joint statistical distribution of Xt1+τ, . . . , Xtℓ+τ for all ℓ and τ. This is an extremely strong definition: it means that all moments of all orders (expectations, variances, third order and higher) of the process, anywhere, are the same. It also means that the joint distribution of (Xt, Xs) is the same as that of (Xt+r, Xs+r) and hence cannot depend on s or t but only on the distance between s and t, i.e. s − t.
Since the definition of strict stationarity is generally too strict for everyday life, a weaker definition of second order or weak stationarity is usually used. Weak stationarity means that the mean and the variance of a stochastic process do not depend on t (that is, they are constant) and the auto-covariance between Xt and Xt+τ can depend only on the lag τ (τ is an integer; the quantities also need to be finite). Hence for stationary processes {Xt} the definition of the auto-covariance is
γ(τ) = cov(Xt, Xt+τ),
for integers τ. It is vital to remember that, for the real world, the auto-covariance of a stationary process is a model, albeit a useful one. Many actual processes are not stationary, as we will see in the next section. Having said this, much fun can be had with stationary stochastic processes!
One also routinely comes across the autocorrelation of a process, which is merely a normalized version of the auto-covariance taking values between −1 and 1, and commonly uses the Greek letter ρ as its notation:
ρ(τ) = γ(τ)/γ(0)
for integers τ, where
γ(0) = cov(Xt, Xt) = var(Xt).
VI. Time Series Data
A time series is a set of statistics, usually collected at regular intervals. Time series data occur naturally in many application areas:
• Economics: e.g., monthly data for unemployment, hospital admissions, etc.
• Finance: e.g., daily exchange rates, share prices, etc.
• Environmental: e.g., daily rainfall, air quality readings.
• Medicine: e.g., ECG brain wave activity every 2−8 sec.
The methods of time series analysis pre-date those for general stochastic processes and Markov chains. The aims of time series analysis are to describe and summarize time series data, to fit low-dimensional models, and to make forecasts.
We write our real-valued series of observations as . . . , X−2, X−1, X0, X1, X2, . . . , a doubly infinite sequence of real-valued random variables indexed by Z.
VII. Trend, Seasonality, Cycles and Residuals
One simple method of describing a series is that of classical decomposition. The notion is that the series can be decomposed into four elements:
1) Trend (Tt): long term movements in the mean;
2) Seasonal effects (It): cyclical fluctuations related to the calendar;
3) Cycles (Ct): other cyclical fluctuations (such as business cycles);
4) Residuals (Et): other random or systematic fluctuations.
The idea is to create separate models for these four elements and then to combine them, either additively
Xt = Tt + It + Ct + Et    (3)
or multiplicatively
Xt = Tt · It · Ct · Et .    (4)
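A minimal additive decomposition in the sense of (3) can be sketched as follows. This is an illustration only, not the paper's procedure: trend is estimated by a centred moving average, the seasonal effect by averaging detrended values per season, and cycles are folded into the residual for brevity. The toy series is invented.

```python
# Sketch: additive decomposition X_t = T_t + I_t + E_t (cycles omitted).

def additive_decompose(x, period):
    n = len(x)
    half = period // 2
    # Centred moving-average trend (None where the window does not fit).
    trend = [None] * n
    for t in range(half, n - half):
        trend[t] = sum(x[t - half:t + half + 1]) / (2 * half + 1)
    # Seasonal effect: mean detrended value for each position in the cycle.
    buckets = [[] for _ in range(period)]
    for t in range(n):
        if trend[t] is not None:
            buckets[t % period].append(x[t] - trend[t])
    seasonal = [sum(b) / len(b) if b else 0.0 for b in buckets]
    return trend, seasonal

# Toy series: linear trend plus a period-3 seasonal pattern [3, -1, -2].
series = [0.5 * t + [3.0, -1.0, -2.0][t % 3] for t in range(24)]
trend, seasonal = additive_decompose(series, period=3)
print([round(s, 3) for s in seasonal])  # [3.0, -1.0, -2.0]
```

Because the moving-average window length equals the seasonal period, the seasonal pattern averages out of the trend estimate and is recovered exactly from the detrended values.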
Stationary processes
1. A sequence {Xt, t ∈ Z} is strongly stationary or strictly stationary if (Xt1, . . . , Xtk) and (Xt1+h, . . . , Xtk+h) have the same joint distribution for all sets of time points t1, . . . , tk and integer h.
2. A sequence is weakly stationary, or second order stationary, if
(a) E(Xt) = μ, and
(b) cov(Xt, Xt+k) = γk,
where μ is constant and γk is independent of t.
3. The sequence {γk, k ∈ Z} is called the auto-covariance function.
4. We also define
ρk = γk/γ0 = corr(Xt, Xt+k)
and call {ρk, k ∈ Z} the auto-correlation function (ACF).
Models for time series: (MA, AR and ARMA models)
This section considers some basic probability models extensively used for modeling time series.
Moving Average models: (MA)
The moving average process of order q is denoted MA(q) and defined by
Xt = Σs=0..q θs εt−s
where θ1, . . . , θq are fixed constants, θ0 = 1, and {εt} is a sequence of independent (or uncorrelated) random variables with mean 0 and variance σ². It is clear from the definition that this is second order stationary and that
γk = σ² Σs=0..q−k θs θs+k for k ≤ q,   γk = 0 for k > q.    (5)
We remark that two moving average processes can have the same auto-correlation function. Let
Xt = εt + θ εt−1  and  Xt = εt + (1/θ) εt−1 .
Both have
ρ1 = θ/(1 + θ²),  ρk = 0 for k > 1.    (6)
However, the first gives
εt = Xt − θ εt−1 = Xt − θ(Xt−1 − θ εt−2) = Xt − θ Xt−1 + θ² Xt−2 − . . .    (7)
This expansion is only valid for |θ| < 1, a so-called invertible process. No two invertible processes have the same autocorrelation function.
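The non-uniqueness in (6) is easy to check numerically. A one-line sketch (not from the paper):

```python
# rho_1 of an MA(1) process X_t = e_t + theta * e_{t-1} is
# theta / (1 + theta^2), which is identical for theta and 1/theta.

def ma1_rho1(theta):
    return theta / (1.0 + theta ** 2)

theta = 0.5
assert abs(ma1_rho1(theta) - ma1_rho1(1.0 / theta)) < 1e-12
print(ma1_rho1(theta))  # 0.4
```

Only the choice with |θ| < 1 is invertible, which is why it is the one identified in practice.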
Probably the next simplest model is that constructed by simple linear combinations of lagged elements of a purely random process {εt} with E(εt) = 0.
A moving average process {Xt} of order q is defined by
Xt = θ0 εt + θ1 εt−1 + . . . + θq εt−q = Σi=0..q θi εt−i    (8)
and the shorthand notation is MA(q). Usually with a newly defined process it is of interest to discover its statistical properties. For an MA(q) process the mean is simple to find (since the expectation of a sum is the sum of the expectations):
E(Xt) = E(Σi=0..q θi εt−i) = Σi=0..q θi E(εt−i) = 0    (9)
because E(εr) = 0 for any r.
A similar argument can be applied for the variance calculation:
var(Xt) = var(Σi=0..q θi εt−i) = Σi=0..q θi² var(εt−i) = σ² Σi=0..q θi²    (10)
since var(εr) = σ² for all r.
The auto-covariance is slightly trickier to work out:
γ(τ) = cov(Xt, Xt+τ)
 = cov(Σi=0..q θi εt−i , Σj=0..q θj εt+τ−j)
 = Σi=0..q Σj=0..q θi θj cov(εt−i, εt+τ−j)
 = σ² Σi=0..q Σj=0..q θi θj δj,i+τ    (11)
where δu,v is the Kronecker delta, which is 1 for u = v and zero otherwise. (This arises because of the independence of the ε values: since δj,i+τ is involved, only terms in the j sum where j = i + τ survive.) Hence continuing the summation gives
γ(τ) = σ² Σi=0..q−τ θi θi+τ .
In other words, j becomes i + τ and the index of summation ranges only up to q − τ, since the largest index occurs for i = q − τ.
The formula for the auto-covariance of an MA(q) process is fascinating: it is effectively the convolution of {θi} with itself (an "auto-convolution").
One of the most important features of an MA(q) auto-covariance is that it is zero for τ > q. The reason for its importance is that when one is confronted with an actual time series x1, . . . , xn, one can compute the sample auto-covariance given by
c(τ) = (1/n) Σi=1..n−τ (xi − x̄)(xi+τ − x̄)    (12)
for τ = 0, . . . , n − 1. The sample autocorrelation can be computed as r(τ) = c(τ)/c(0). If, when one computes the sample auto-covariance, it cuts off at a certain lag q, i.e. it is effectively zero for lags of q + 1 or higher, then one can postulate the MA(q) model in (8) as the underlying probability model. There are other checks and tests that one can make, but comparison of the sample auto-covariance with reference values, such as the model auto-covariance given in (11), is the first major step in model identification.
Also, at this point one should question what one means by "effectively zero". The sample auto-covariance is an empirical statistic calculated from the random sample at hand. If more data in the time series were collected, or another sample stretch used, then the sample auto-covariance would be different (although for long samples and stationary series the probability of a large difference should be very small). Hence, sample auto-covariances and autocorrelations are necessarily random quantities, and "is effectively zero" translates into a statistical hypothesis test on whether the true autocorrelation is zero or not. Finally, whilst we are on the topic of the sample auto-covariance, notice the extremes of the range of τ: the sample variance is
c(0) = (1/n) Σi=1..n (xi − x̄)²
while
c(n − 1) = (1/n)(x1 − x̄)(xn − x̄).
The lesson here is that c(0), which is an estimate of γ(0), is based on n pieces of information, whereas c(n − 1), an estimate of γ(n − 1), is based on only 1 piece of information. Hence it is easy to see that estimates of the sample auto-covariance at higher lags are less reliable than those at smaller lags.
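The cut-off behavior described above can be demonstrated on a simulated series. The sketch below (an illustration, not the paper's data) computes the sample autocorrelation r(τ) of equation (12) for a simulated MA(1) series and checks that it is small, "effectively zero", beyond lag q = 1.

```python
# Sketch: sample ACF of a simulated MA(1) series cuts off after lag 1.
import random

def sample_acf(x, max_lag):
    n = len(x)
    xbar = sum(x) / n
    c0 = sum((v - xbar) ** 2 for v in x) / n
    acf = []
    for tau in range(1, max_lag + 1):
        c = sum((x[i] - xbar) * (x[i + tau] - xbar) for i in range(n - tau)) / n
        acf.append(c / c0)
    return acf

random.seed(1)
eps = [random.gauss(0.0, 1.0) for _ in range(5001)]
theta = 0.8
x = [eps[t] + theta * eps[t - 1] for t in range(1, 5001)]  # MA(1) sample path

r = sample_acf(x, max_lag=5)
# r[0] should be near theta/(1+theta^2) ~ 0.488; lags 2-5 near zero.
```

Deciding that the higher lags are "effectively zero" is, as the text notes, a hypothesis test; a common rule of thumb compares them against ±2/√n.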
VIII. Autoregressive Processes: (AR)
The autoregressive process of order p is denoted AR(p), and defined by
Xt = Σr=1..p φr Xt−r + εt
where φ1, φ2, . . . , φp are fixed constants and {εt} is a sequence of independent (or uncorrelated) random variables with mean 0 and variance σ².
The AR(1) process is defined by
Xt = φ1 Xt−1 + εt .    (13)
To find its auto-covariance function we make successive substitutions, to get
Xt = εt + φ1(εt−1 + φ1(εt−2 + . . .)) = εt + φ1 εt−1 + φ1² εt−2 + . . .
The fact that {Xt} is second order stationary follows from the observation that E(Xt) = 0 and that the auto-covariance function can be calculated as follows:
γ0 = E[(εt + φ1 εt−1 + φ1² εt−2 + . . .)²] = σ²(1 + φ1² + φ1⁴ + . . .) = σ²/(1 − φ1²) ,
γk = φ1^k σ²/(1 − φ1²) .    (14)
There is an easier way to obtain these results. That is to multiply equation (13) by Xt−k and take the expected value, to give
E(Xt Xt−k) = φ1 E(Xt−1 Xt−k) + E(εt Xt−k),
thus γk = φ1 γk−1,  k = 1, 2, . . .
Similarly, squaring (13) and taking the expected value gives
E(Xt²) = φ1² E(Xt−1²) + 2φ1 E(Xt−1 εt) + E(εt²) = φ1² γ0 + σ²,
and so γ0 = σ²/(1 − φ1²).    (15)
More generally, the AR(p) process is defined as
Xt = φ1 Xt−1 + φ2 Xt−2 + . . . + φp Xt−p + εt .
Again, the autocorrelation function can be found by multiplying this equation by Xt−k, taking the expected value and dividing by γ0, thus producing the Yule-Walker equations
ρk = φ1 ρk−1 + φ2 ρk−2 + . . . + φp ρk−p ,  k = 1, 2, . . .    (16)
These are linear recurrence relations, with general solution of the form
ρk = C1 ω1^k + . . . + Cp ωp^k
where ω1, ω2, . . . , ωp are the roots of
ω^p − φ1 ω^(p−1) − φ2 ω^(p−2) − . . . − φp = 0
and C1, . . . , Cp are determined by ρ0 = 1 and the equations for k = 1, . . . , p − 1. It is natural to require ρk → 0 as k → ∞, in which case the roots must lie inside the unit circle, that is, |ωi| < 1. Thus there is a restriction on the values of φ1, . . . , φp that can be chosen.
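For AR(1) the Yule-Walker recursion (16) collapses to ρk = φ1 ρk−1 with ρ0 = 1, giving ρk = φ1^k, and (15) gives γ0 = σ²/(1 − φ1²). A short sketch (not from the paper; the parameter values are arbitrary):

```python
# Sketch: AR(1) autocorrelations via the Yule-Walker recursion,
# and gamma_0 from equation (15).

def ar1_acf(phi, max_lag):
    rho = [1.0]
    for _ in range(max_lag):
        rho.append(phi * rho[-1])  # Yule-Walker: rho_k = phi * rho_{k-1}
    return rho

phi, sigma2 = 0.6, 1.0
rho = ar1_acf(phi, 4)
gamma0 = sigma2 / (1.0 - phi ** 2)
print(rho[:3])   # [1.0, 0.6, 0.36]
print(gamma0)    # 1.5625
```

Note the contrast with the MA(q) case: the AR autocorrelations decay geometrically but never cut off exactly, which is one of the diagnostics used in Box-Jenkins model identification.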
IX. ARMA Models
Both AR and MA models express different kinds of stochastic dependence. AR processes encapsulate a
Markov-like quality where the future depends on the past, whereas MA processes combine elements of
randomness from the past using a moving window. An obvious step is to combine both types of behavior into an ARMA(p, q) model, which is obtained by a simple concatenation. The autoregressive moving average process ARMA(p, q) is defined by
Xt = Σr=1..p φr Xt−r + Σs=0..q θs εt−s    (17)
where again {εt} is white noise. This process is stationary for appropriate φ, θ.
Also, if the original process {Yt} is not stationary, we can look at the first order difference process
Xt = ∇Yt = Yt − Yt−1
or the second order differences
Xt = ∇²Yt = ∇(∇Y)t = Yt − 2Yt−1 + Yt−2    (18)
and so on. If we ever find that the differenced process is a stationary process, we can look for an ARMA model for it.
The process {Yt} is said to be an autoregressive integrated moving average process, ARIMA(p, d, q), if Xt = ∇^d Yt is an ARMA(p, q) process. AR, MA, ARMA and ARIMA processes can be used to model many time series. A key tool in identifying a model is an estimate of the auto-covariance function. We can estimate the mean by
X̄ = (1/T) Σt=1..T Xt ,
the auto-covariance by
ck = γ̂k = (1/T) Σt=k+1..T (Xt − X̄)(Xt−k − X̄) ,    (19)
and the autocorrelation by rk = ρ̂k = γ̂k / γ̂0 .
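The differencing step in (18) and the estimators of (19) can be sketched as follows (an illustration with an invented series, not the paper's data):

```python
# Sketch: first differencing as in (18) and the autocovariance estimator (19).

def difference(y, d=1):
    """Apply the difference operator d times: (del y)_t = y_t - y_{t-1}."""
    for _ in range(d):
        y = [y[t] - y[t - 1] for t in range(1, len(y))]
    return y

def acov(x, k):
    # gamma_hat_k = (1/T) * sum_{t=k+1}^{T} (X_t - Xbar)(X_{t-k} - Xbar)
    T = len(x)
    xbar = sum(x) / T
    return sum((x[t] - xbar) * (x[t - k] - xbar) for t in range(k, T)) / T

# A linear trend is removed entirely by one difference.
y = [2.0 * t + 5.0 for t in range(10)]
w = difference(y)
print(w)           # nine values, all equal to 2.0: constant, hence stationary
print(acov(w, 1))  # 0.0 for a constant series
```

This mirrors the Box-Jenkins workflow used later in the paper: difference until stationary, then inspect the estimated autocorrelations.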
X. Application Part
In order to build an appropriate time series model, through the Box-Jenkins methodology, to predict the daily temperatures in the city of Sulaimani, data have been collected from the records of the meteorological directorate of Sulaimani for the period 2012 to September 2013, as illustrated in table (1) in the appendix. A statistical analysis of the data was carried out to see whether a transformation was needed to obtain homogeneous and stable data. The mean and range of the data were taken: the time series was divided into sub-groups of 12 observations each, and the mean and range of each sub-group were calculated, as shown in table (1). A plot of the mean against the range, figure (1), shows that the values are scattered in a random manner, suggesting independence of the mean and range; the time series is therefore homogeneous and no transformation is needed. To check the stationarity of the time series, the autocorrelation coefficients were computed; as shown in table (2) in the appendix, the time series is non-stationary, so the first difference was taken to convert it to a stationary series. The significance of the correlation coefficients over several lags was then examined, and the autocorrelation and partial autocorrelation coefficients were inspected for seasonal effects. Diagnosing an appropriate model from the values in tables (2) and (3) respectively, all the models were tested, and the result is that the appropriate model is ARIMA(1,1,1), of the form
Wt = φ1 Wt−1 − θ1 at−1 + at
where Wt is the first difference of the series.
Table (1) shows the mean and the range of daily temperatures (2012-Sept. 2013) for Sulaimani by sub-groups, each of size 12. Mean of series = 20.68, variance = 104.91.

N   range   mean      N   range   mean      N   range   mean
1   3.40    7.792     20  3.60    33.613    40  5.45    19.462
2   6.06    4.604     21  4.20    33.241    41  7.05    17.312
3   6.55    5.483     22  3.40    31.585    42  10.25   23.025
4   3.75    7.579     23  4.60    29.837    43  8.05    21.420
5   9.50    6.795     24  3.55    27.808    44  8.25    25.425
6   9.70    5.862     25  5.40    25.895    45  4.20    27.541
7   12.60   11.058    26  9.75    21.854    46  3.00    30.850
8   9.35    14.429    27  2.35    19.145    47  3.60    32.208
9   6.65    19.183    28  7.10    15.408    48  4.90    34.104
10  7.20    20.458    29  3.15    12.800    49  5.15    33.083
11  7.30    23.962    30  4.35    10.450    50  4.86    32.637
12  3.40    7.792     31  6.65    8.660     51  5.65    33.241
13  7.50    25.263    32  4.25    8.660     52  7.00    31.716
14  7.35    26.095    33  6.25    4.004     53  5.45    19.462
15  7.35    31.337    34  4.05    10.404
16  5.55    31.383    35  8.95    9.217
17  2.20    30.662    36  3.95    10.420
18  3.85    34.504    37  7.65    11.943
19  3.75    35.195    38  11.45   14.729
In this model, the estimated parameters minimize the mean square error (MSE = 3.221): φ = 0.193 and θ = 0.249, so the fitted model is W_t = 0.193·W_{t-1} - 0.249·a_{t-1} + a_t. Using this model, the estimates and residual values for January 2013 were found, as shown in Table (3) in the Appendix. To verify that the model forecasts efficiently, the autocorrelation coefficients of the residuals were also computed, as shown in Table (4) in the Appendix. The same process was then applied to 15 further days of September 2013, as illustrated in Figure (2).
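The one-step forecasting recursion implied by the fitted model can be sketched as follows. The parameters φ = 0.193 and θ = 0.249 are the paper's estimates; initializing the first residual at zero is a common conditional choice, and the short demonstration series is illustrative, not the Sulaimani data.

```python
PHI, THETA = 0.193, 0.249  # AR and MA parameters estimated in the paper

def forecast_next(z, phi=PHI, theta=THETA):
    """One-step-ahead forecast of Z_{n+1} for an ARIMA(1,1,1) model
    W_t = phi*W_{t-1} - theta*a_{t-1} + a_t, where W_t = Z_t - Z_{t-1}.
    The first residual is assumed to be zero (conditional initialization)."""
    w = [z[t] - z[t - 1] for t in range(1, len(z))]
    a = 0.0  # assumed initial residual
    for t in range(1, len(w)):
        w_hat = phi * w[t - 1] - theta * a  # one-step forecast of W_t
        a = w[t] - w_hat                    # residual a_t
    w_next = phi * w[-1] - theta * a        # forecast of the next difference
    return z[-1] + w_next                   # undo the differencing

# Illustrative series (not the temperature data):
print(forecast_next([10.0, 12.0, 11.0, 13.0]))
```

Note that for a constant series all differences and residuals are zero, so the forecast simply repeats the last observation, as the model form requires.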
Table (2): Autocorrelation coefficients r_k after taking the first difference (lags k = 1-31, 41-71, 81-111 and 121-151).

k   r_k      k   r_k      k    r_k      k    r_k
1   0.048    41  0.068    81   0.018    121  -0.021
2   -0.195   42  -0.054   82   -0.025   122  -0.015
3   -0.079   43  -0.021   83   0.065    123  -0.001
4   -0.140   44  0.004    84   0.016    124  -0.058
5   -0.089   45  0.021    85   0.004    125  0.041
6   -0.008   46  0.024    86   0.085    126  -0.003
7   -0.039   47  0.069    87   -0.049   127  -0.041
8   -0.118   48  0.004    88   -0.039   128  0.088
9   -0.070   49  0.043    89   0.007    129  0.008
10  -0.136   50  0.015    90   -0.028   130  -0.017
11  -0.001   51  0.083    91   -0.073   131  -0.041
12  0.032    52  -0.028   92   -0.001   132  0.002
13  -0.027   53  -0.006   93   -0.020   133  0.024
14  -0.046   54  -0.003   94   -0.028   134  -0.038
15  0.048    55  -0.013   95   -0.033   135  0.006
16  -0.018   56  0.042    96   0.009    136  0.069
17  0.120    57  0.031    97   -0.015   137  0.048
18  -0.083   58  -0.004   98   0.004    138  -0.041
19  0.006    59  0.070    99   -0.009   139  -0.009
20  -0.044   60  -0.007   100  0.011    140  -0.029
21  0.010    61  0.004    101  -0.016   141  0.000
22  0.078    62  -0.057   102  0.008    142  -0.036
23  -0.033   63  -0.044   103  -0.002   143  -0.023
24  0.031    64  -0.006   104  -0.002   144  0.019
25  0.023    65  -0.035   105  0.053    145  -0.058
26  0.106    66  0.006    106  -0.020   146  -0.057
27  0.049    67  0.009    107  -0.075   147  -0.006
28  0.063    68  -0.099   108  0.020    148  0.021
29  0.001    69  -0.035   109  0.036    149  -0.101
30  0.030    70  -0.090   110  -0.009   150  -0.004
31  -0.044   71  -0.016   111  -0.043   151  -0.005
Actual data, forecast data and absolute error for 15 days of September 2013:

Day  Actual  Forecast  Error
1    29.3    28.31     0.99
2    27.8    25.90     1.90
3    27.5    26.10     1.40
4    26.7    24.40     2.30
5    28.0    23.80     4.20
6    27.3    26.90     0.40
7    28.2    27.30     0.90
8    23.9    20.20     3.70
9    21.8    19.50     2.30
10   24.6    20.20     4.40
11   32.4    29.32     3.08
12   33.2    28.46     4.74
13   32.0    30.23     1.77
14   31.4    31.22     0.18
15   33.1    31.29     1.81
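The forecast accuracy shown above can be condensed into standard summary measures. The sketch below computes the mean absolute error and root mean square error from the 15 actual/forecast pairs listed; MAE and RMSE are standard additions here, not figures reported in the paper.

```python
import math

# The 15 actual and forecast temperatures from the table above.
actual = [29.3, 27.8, 27.5, 26.7, 28.0, 27.3, 28.2, 23.9,
          21.8, 24.6, 32.4, 33.2, 32.0, 31.4, 33.1]
forecast = [28.31, 25.9, 26.1, 24.4, 23.8, 26.9, 27.3, 20.2,
            19.5, 20.2, 29.32, 28.46, 30.23, 31.22, 31.29]

errors = [abs(a - f) for a, f in zip(actual, forecast)]
mae = sum(errors) / len(errors)                             # mean absolute error
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))  # root mean square error
print(round(mae, 3), round(rmse, 3))
```

Both measures sit well under the series standard deviation (the variance of the series is 104.91), which is consistent with the paper's claim that the model forecasts efficiently.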
Figure (1) Mean and Range
Figure (2): Actual and forecast data for 15 days of September 2013 (temperature plotted against day, actual and forecast series).
Based on this applied study, in which an autoregressive model was built for the time series using Box-Jenkins models, the researcher has reached a number of conclusions and recommendations.
XI. Conclusions
1) From the values of the autocorrelation coefficients in Table (2) in the Appendix, it is observed that the daily temperature in the city of Sulaimani forms a non-stationary time series.
2) From the autocorrelation and partial autocorrelation coefficients in Tables (2) and (3) respectively, in the Appendix, it is observed that the appropriate model is ARIMA(1,1,1), which can be written in the following form: W_t = φ·W_{t-1} - θ·a_{t-1} + a_t.
3) Figure (2) shows that the model given in the previous paragraph yields good estimates, close to the actual values.
Recommendations:
Based on the reported findings, the researcher recommends:
1) The Meteorological Department and the relevant authorities can rely on statistical models of this kind to forecast the weather.
2) Different Box-Jenkins models can be used to predict daily temperature in other regions of Kurdistan.
Appendices
Table (1): Daily temperatures in the city of Sulaimani (2012-Sept. 2013).
6.3
7.6 13.2 25.3 28.3 36.5 34.4 24.8 15.8 10.4 8.2
8.3
7.6 12.7 25.5 28.4 37.0 33.1 26.9 14.2 10.3
7.8
8.0
8.1 13.1 25.2 31.3 34.9 33.3 25.9 13.5
8.5
9.4
7.4
8.6 13.2 23.5 32.4 35.8 30.8 24.3 12.3
9.2
8.1
7.1
9.9 12.9 20.5 33.2 35.7 30.1 25.8 13.5
7.8
6.2
7.0
6.8 15.0 22.2 32.9 35.2 31.3 26.8 13.6
9.5
8.8
7.8
7.5 10.5 22.5 34.5 35.2 30.0 24.2 14.3
6.6
7.2
8.2 11.4 22.0 35.8 36.1 29.0 23.7 15.2
6.9
9.2
4.3 13.1 23.0 33.8 37.4 32.3 23.4 13.8
6.4
8.5
3.4 11.4 23.9 29.8 35.6 33.4 24.7 13.5
1.0
7.2
4.6 10.6 26.3 28.3 34.3 33.5 26.2 11.9
0.7
9.7
3.2 13.7 27.8 28.9 33.6 31.3 23.8 12.2
1.2
7.1
6.4 17.3 27.1 31.8 36.1 28.9 24.9 11.6
3.0
5.5
5.6 18.3 28.3 32.0 34.1 31.2 23.9 11.1
4.0
6.1
6.9 19.3 26.3 32.2 34.1 31.8 21.3 11.3
4.5
14.6
14.4
10.6
8.3
10.1
12.3
11.9
10.9
10.3
10.3
9.0
10.0
10.8
10.9
11.7
13.5
17.2
10.3
10.6
14.2
15.3
15.8
14.6
15.2
18.9
20.9
21.4
21.0
19.6
15.5
14.0
15.8
10.4
9.5
12.1
13.8
13.8
12.7
12.7
14.6
14.9
15.7
15.9
15.3
13.6
29.3
28.0
28.8
30.2
34.3
33.1
30.7
29.9
30.3
29.8
29.9
31.2
30.5
31.8
33.0
33.1
34.3
35.2
34.7
32.5
33.3
33.0
35.3
31.2
30.1
30.6
32.5
31.2
31.2
35.5
34.0
33.2
31.3