One-day wave forecasts based on artificial neural networks


1. ONE-DAY WAVE FORECASTS BASED ON ARTIFICIAL NEURAL NETWORKS
   S.N. Londhe and Vijay Panchang (manuscript received 31 May 2005; in final form 24 March 2006)
2. OBJECTIVE
   • There has been a significant increase in the number of instruments deployed in coastal areas to make hydrodynamic measurements (providing hourly measurements of water levels, waves and other oceanographic parameters in near real time), mainly to develop "ocean observing systems".
   • This paper attempts to enhance the value of this expanding base of measurements by providing a forecast through the use of ANNs.
   • Forecasts can be obtained through the continuous operation of numerical models: the Princeton Ocean Model for water levels and velocities, and the Simulating Waves Nearshore (SWAN) and WAVEWATCH wave models.
3. OBJECTIVE
   • However, in some regions it is not feasible to use such models owing to the unavailability of the required (future) forcing functions, such as offshore boundary conditions and wind stresses, at adequate resolution over the model domain.
   • This paper addresses the issue of forecasting significant wave heights for the next 24 hours at the location of a buoy where measurements are available.
4. REFERENCE TO PREVIOUS WORKS
   • Deo and Naidu (1999) used ANNs along with measurements from a wave buoy moored off the east coast of India to correlate pairs of wave height data and to estimate a future value from the current value.
   • Agrawal and Deo (2002) correlated three pieces of data at time steps t-Δt, t and t+Δt. While their predictions yielded a very high correlation relative to the targets in validation tests for Δt = 3 hours, they used averages for Δt > 3 hours: for example, the average of the current day's wave heights was used to predict the next day's average wave height.
5. REFERENCE TO PREVIOUS WORKS
   • Both studies were based on data from a single location and covered a limited duration (16 months). The data used for validation covered only 3 months and displayed extremely limited wave height variability (less than 1.4 m).
   • Makarynskyy's study (2004) off the Irish coast included much larger waves but short datasets. Medina (2005), however, noted that Makarynskyy's modeling strategies can be problematic owing to their susceptibility to overfitting and to correlation between too many values.
6. BASIC OVERVIEW
   • Data come from 6 buoys in 3 areas (Gulf of Mexico, Gulf of Maine and Gulf of Alaska) where the wave height environments are statistically different.
   • The lengths of the data used are much greater than those in previous efforts, as long as 12 years in some cases.
   • At some of these locations, extreme wave events with recurrence intervals of the order of 100 years or more are included.
   • The possibility of obtaining time-specific forecasts, rather than averages over long time intervals, is explored in order to enhance the practical usefulness of the predictions.
7. BASIC OVERVIEW
   • To eliminate numerical problems associated with overfitting the data, a strategy was devised to recognize wave height patterns that permits a judicious selection of data points covering a period typical of a storm length without creating enormously large matrices.
   • In addition to the usual validation with historical data, the performance of these models in real-time predictions starting from 1 March 2005 was tested.
8. STUDY AREA
9. STUDY AREA
10. DATA
11. DATA
   • H2% is the 50-year wave height, i.e. the wave height with a 2% chance of being exceeded in any given year.
   • 46082: as data for only 3 years were available, H2% could not be estimated.
   • 44007 and 44013, in the Gulf of Maine, had data for about 12 years.
   • The distance of the buoys from the coast varies from 22 to 156 km, and accordingly H2% varies between 5.53 and 9.56 m.
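The two statements of H2% in the slide above (50-year wave height; 2% annual chance of exceedance) are linked by the standard return-period relation T = 1/p. A minimal sketch, where the function name is illustrative and not from the paper:

```python
def return_period(annual_exceedance_prob):
    """Average recurrence interval (years) for a given annual exceedance probability."""
    return 1.0 / annual_exceedance_prob

# A 2% chance of exceedance in any given year is the 50-year event:
print(return_period(0.02))  # -> 50.0
```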
12. DATA
13. DATA
   • The percentage of small wave heights (less than 3 m) is very high compared with that of very large wave heights (larger than 9 m). In general, the percentage of wave heights greater than 3 m is fairly small.
14. ANN MODELING STRATEGY
   • Three-layered "feed-forward back-propagation" networks were developed for each location to forecast wave heights for a maximum lead time of 24 hours.
   • It was assumed that the wave heights over the 24 hours for which the forecast is made are a function of the wave heights over the previous 7 days, which in turn are assumed to have a decreasing influence as the measurements become more remote from the onset of the forecast.
15. ANN MODELING STRATEGY
   • The input layer had 28 neurons representing 7 days of wave heights (at 12 and 24 hours on day 7; at 8, 16 and 24 hours on days 6 and 5; at 6, 12, 18 and 24 hours on days 4 and 3; at 4, 8, 12, 16, 20 and 24 hours on days 2 and 1).
   • The output layer had 4 neurons for the wave heights predicted on the 8th day at t = 6, 12, 18 and 24 hours.
   • Sets of 32 data points (28 input wave heights and 4 output wave heights) can be created at increments of 1 hour.
   • This creates about 8000 data sets per year.
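The sampling pattern above can be sketched as follows. The helper `make_pattern` and its indexing convention (t0 is the last measured hour before the forecast window, with day 7 the most remote day) are illustrative assumptions, not code from the paper:

```python
import numpy as np

# Hours (within each past day) at which wave heights feed the input layer,
# per the slide; day 7 is the most remote day, day 1 the most recent.
HOURS_PER_DAY = {
    7: (12, 24),
    6: (8, 16, 24), 5: (8, 16, 24),
    4: (6, 12, 18, 24), 3: (6, 12, 18, 24),
    2: (4, 8, 12, 16, 20, 24), 1: (4, 8, 12, 16, 20, 24),
}
OUTPUT_HOURS = (6, 12, 18, 24)  # day-8 forecast targets

def make_pattern(series, t0):
    """Build one (input, target) pair from an hourly wave-height series.

    series[t] is the significant wave height at hour t; t0 is the last
    measured hour before the forecast window (day 8 spans t0+1..t0+24).
    """
    x = [series[t0 - 24 * d + h]
         for d in sorted(HOURS_PER_DAY, reverse=True)
         for h in HOURS_PER_DAY[d]]
    y = [series[t0 + h] for h in OUTPUT_HOURS]
    return np.array(x), np.array(y)

# Sliding t0 in 1-hour increments over a year of data yields roughly
# 8000 patterns, as the slide states:
series = np.random.rand(24 * 365)          # stand-in for buoy measurements
X, Y = zip(*(make_pattern(series, t0)
             for t0 in range(7 * 24, len(series) - 24)))
print(len(X[0]), len(Y[0]))                # -> 28 4
```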
16. ANN MODELING STRATEGY
   • As the length of data available for the first buoy is much shorter than for the others, its datasets were constructed on a daily rather than an hourly basis, giving 358 sets for one year. The results obtained by the two methods were practically indistinguishable.
17. NETWORK ARCHITECTURE
   • The data were normalized to the range 0 to 1.
   • A log-sigmoid transfer function was used in both the hidden and output layers.
   • The conjugate gradient algorithm with the Fletcher-Reeves update (CGF) was used for training.
   • The number of epochs varied from 50 to 500, with 500 producing the maximum correlation.
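A minimal sketch of such a three-layer network, assuming 10 hidden neurons for illustration (the slides do not state the hidden-layer size); the Fletcher-Reeves conjugate-gradient training itself is not reproduced here:

```python
import numpy as np

def logsig(z):
    # Log-sigmoid transfer function, used in both hidden and output layers
    return 1.0 / (1.0 + np.exp(-z))

def normalize(h, h_min, h_max):
    # Min-max scaling of wave heights into [0, 1], as in the slides
    return (h - h_min) / (h_max - h_min)

class WaveNet:
    """Hypothetical 3-layer feed-forward structure: 28 inputs, 4 outputs.

    Weights are random placeholders; the paper trained such a network with
    the Fletcher-Reeves conjugate-gradient (CGF) algorithm.
    """
    def __init__(self, n_in=28, n_hidden=10, n_out=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))
        self.b2 = np.zeros(n_out)

    def forward(self, x):
        h = logsig(self.W1 @ x + self.b1)         # hidden layer
        return logsig(self.W2 @ h + self.b2)      # output layer

net = WaveNet()
x = normalize(np.random.uniform(0, 9, 28), 0.0, 9.0)
print(net.forward(x))  # 4 values in (0, 1): normalized day-8 wave heights
```

Because the log-sigmoid output is bounded in (0, 1), the 0-to-1 normalization of the targets is what lets the output layer represent wave heights at all; the forecast is recovered by inverting the scaling.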
18. RESULTS AND DISCUSSION
   • All six networks were tested using data for 1 year, generally the last year (i.e. 2004), except for buoy 46060, where data for 2003 were used because 2004 had many missing values.
   • Predictions were made for the period between 8 January and 31 December at all locations except buoy 42040, where measurements after 16 September 2004 were not available.
24.
   • The highest peaks are sometimes substantially underpredicted.
   • This may be because the phenomenon of wave development is misrepresented by the chosen pattern of data arrangement.
   • Alternatively, it may be due to the relative dominance of the smaller wave heights in the datasets, which gives them greater influence during training of the ANN and hence better predictions for low wave heights.
26. REAL-TIME PREDICTION
   • The models were run continuously from 1 April 2005 using the previous 7 days of wave height measurements.
   • Predictions were made from 1 to 30 April 2005.
   • The 6-hour forecasts are observed to be highly reliable.
   • The results display the same trend as noted earlier.
28. CONCLUSION
   • Very large correlation values were obtained for the smaller lead times of 6 and 12 hours.
   • For larger lead times, the trend was correctly forecast, but the actual wave heights or the timing of peaks (usually) showed a delay relative to the data.
   • Inclusion of greater variability in the data used to train the model resulted in a significant improvement in its ability to reproduce larger peaks.
   • Once trained, the model is extremely rapid to run: for real-time data, on a Pentium 4 processor with 1 GB of RAM, it takes only 1 second to provide a forecast.
