Qualifying Combined Solar Power Forecasts in Ramp Events' Perspective
Mohamed Abuella, Student Member, IEEE; Badrul Chowdhury, Senior Member, IEEE
Department of Electrical and Computer Engineering
Energy Production & Infrastructure Centre (EPIC)
University of North Carolina at Charlotte, Charlotte, NC 28223-0001
1. Objectives
Applying a qualifying metric for solar power forecasts to assess their
capability to predict the ramp events, especially by the combined
forecasts, then those forecasts can be implemented for:
• Managing high ramp-rates of PV solar power generation;
• Optimal energy management of energy storage systems;
• Voltage regulator settings on feeders with PV distributed generation.
4. Most Common Evaluation Metrics
The root mean square error (RMSE), the mean bias error (MBE), and the skill score (SS) are the most common metrics and are the ones chosen to evaluate the forecasts in our case study. In addition, a proposed metric that depends on the ramp rates of solar power is explained below.
5. Case Study
5.1. Data Description
The solar PV power system is located in Australia. The weather forecast data and the measured solar power data were recorded from April 2012 to May 2014. The weather forecasts come from the European Centre for Medium-Range Weather Forecasts (ECMWF), a global numerical weather prediction (NWP) model. The test part of the data covers 12 months, from June 2013 to May 2014.
2. Definition of Solar Power Ramp Rates
Acknowledgement
The authors would like to acknowledge the support of Energy Production and Infrastructure Centre (EPIC) at
UNC Charlotte.
To evaluate the ramp events of the forecasts, the ramp rate of each hour is calculated as in (1), and the RMSE of the ramp rates (RMSERR) is calculated as in (6):
Fig. 5: The combined forecasts' improvement over the different models
where P(t) is the solar power at the target hour (it can also be its forecast, F(t)), and D is the time duration over which the ramp rate is determined.
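A minimal sketch of Eq. (1) in code (hypothetical hourly values in p.u.; the function name and data are illustrative, not from the poster):

```python
# Compute hourly ramp rates per Eq. (1): RampRate(t) = (P(t+D) - P(t)) / D
from typing import List

def ramp_rates(power: List[float], duration_hours: int = 1) -> List[float]:
    """Ramp rate for each hour t that has data available at t + D."""
    d = duration_hours
    return [(power[t + d] - power[t]) / d for t in range(len(power) - d)]

# Example: normalized (p.u.) hourly PV power over five hours.
pv = [0.0, 0.2, 0.6, 0.7, 0.3]
print([round(r, 2) for r in ramp_rates(pv)])  # [0.2, 0.4, 0.1, -0.4]
```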
Fig. 1: (a) Solar power forecasts (p.u.) and (b) their ramp rates (p.u./hr) for 2 days
Fig. 4: (a) The 24-hour-ahead forecasting scheme and (b) the combining methodology for the hour-ahead forecasts on May 31st
3. Evaluation Metrics
Table I: Evaluation Metrics of Renewable Energy Forecasting
This study qualifies combined solar power forecasts in terms of solar power ramp events. Several solar power forecasting models are implemented to generate various forecasts, and those forecasts are then combined by an ensemble learning method: random forests (RF).
5.3. Results and Evaluation
Table II: RMSE of Different Models and Combined Forecasts
6. Conclusion
• The evaluation metrics of solar power forecasting should be chosen based on the application of the forecasts.
• The common evaluation metrics indicate that the combined forecast has the best performance. However, although the combined forecasts are better than the individual forecasts, this does not necessarily mean that they are also the best at capturing ramp events; that can be identified with the proposed metric, the root mean square error of ramp rates (RMSERR).
• In terms of ramp-event forecasting, the combined forecasts are not the most efficient option for applications that rely on highly accurate ramp-event forecasts.
• In addition, the RMSERR reveals room for improving the ramp-event forecasting using the combined forecast, which may be considered as further work of this study.
Fig. 2: Frequency of evaluation metrics used in renewable energy forecasting [Ref.1] (RMSE is the most common)
RMSE = Root Mean Square Error
MAE = Mean Absolute Error
MBE = Mean Bias Error
MAPE = Mean Absolute Percentage Error
SS = Skill Score
MSE = Mean Square Error
STD = Standard Deviation
R2 = Coefficient of Determination
MPE = Mean Percentage Error
CC = Correlation Coefficient
ME = Mean Error
MaxAE = Maximum Absolute Error
MeAPE = Median of the Absolute Percentage Error
MASE = Mean Absolute Scaled Error
$Ramp\,Rate(t) = \dfrac{dP(t)}{dt} = \dfrac{P(t+D) - P(t)}{D}$  (1)

$RMSE = \sqrt{\dfrac{1}{n}\sum_{i=1}^{n}\left(P_i - F_i\right)^2}$  (2)

$MBE = \dfrac{1}{n}\sum_{i=1}^{n}\left(P_i - F_i\right)$  (3)

$MAE = \dfrac{1}{n}\sum_{i=1}^{n}\left|P_i - F_i\right|$  (4)

where F is the solar power forecast and P is the observed value of the solar power; both are normalized to the nominal installed capacity of the solar power system, and n is the number of hours, which can be day-hours or month-hours.

The improvement, or skill score (SS), metric compares a method with respect to a benchmark method:

$Skill\,Score\,(\%) = \left(1 - \dfrac{RMSE_{method}}{RMSE_{benchmark}}\right)\times 100$  (5)

$RMSE_{RR} = \sqrt{\dfrac{1}{n}\sum_{i=1}^{n}\left(RR_{P_i} - RR_{F_i}\right)^2}$  (6)

where $RR_P$ and $RR_F$ are the ramp rates of the observed solar power and the forecasts, respectively.
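Plain-Python sketches of Eqs. (2)-(6) may make the definitions concrete (these are illustrative implementations, not the authors' code):

```python
# RMSE, MBE, MAE, skill score, and the proposed RMSE of ramp rates (RMSERR).
import math
from typing import List

def rmse(obs: List[float], fc: List[float]) -> float:
    """Eq. (2): root mean square error between observations and forecasts."""
    return math.sqrt(sum((p - f) ** 2 for p, f in zip(obs, fc)) / len(obs))

def mbe(obs: List[float], fc: List[float]) -> float:
    """Eq. (3): mean bias error (positive means under-forecasting)."""
    return sum(p - f for p, f in zip(obs, fc)) / len(obs)

def mae(obs: List[float], fc: List[float]) -> float:
    """Eq. (4): mean absolute error."""
    return sum(abs(p - f) for p, f in zip(obs, fc)) / len(obs)

def skill_score(rmse_method: float, rmse_benchmark: float) -> float:
    """Eq. (5): improvement (%) of a method over a benchmark."""
    return (1 - rmse_method / rmse_benchmark) * 100

def rmserr(obs: List[float], fc: List[float], d: int = 1) -> float:
    """Eq. (6): RMSE computed on the hourly ramp rates instead of the power."""
    rr_p = [(obs[t + d] - obs[t]) / d for t in range(len(obs) - d)]
    rr_f = [(fc[t + d] - fc[t]) / d for t in range(len(fc) - d)]
    return rmse(rr_p, rr_f)
```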
5.2. The Methodology
$Persistence\ Model \Rightarrow F(t) = P(t-1)$  (7)
The first stage produces the 24-hour-ahead forecasts from the NWP data and the observed solar power, as shown in Fig. 4(a). In the second stage, shown in Fig. 4(b), the ensemble learner (i.e., the random forest) combines the available weather data and the first-stage forecasts, blended with the persistence model (i.e., one-hour-lagged solar power observations), to achieve better hour-ahead forecasts rather than day-ahead forecasts.
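As a structural sketch of the two-stage scheme (hypothetical numbers and stand-in models throughout; the poster's actual combiner is a random forest, replaced here by a simple mean so the example stays dependency-free):

```python
# Stage 1: individual models map NWP features to day-ahead forecasts.
# Stage 2: a combiner blends weather data, stage-1 outputs, and persistence.

def persistence(power_history):            # Eq. (7): F(t) = P(t-1)
    return power_history[-1]

def stage1_day_ahead(nwp_features, models):
    """Each individual model produces a day-ahead forecast from NWP features."""
    return {name: model(nwp_features) for name, model in models.items()}

def stage2_hour_ahead(nwp_features, day_ahead, power_history, combiner):
    """Combine weather data, stage-1 outputs, and the persistence forecast."""
    inputs = list(nwp_features) + list(day_ahead.values()) + [persistence(power_history)]
    return combiner(inputs)

# Hypothetical stand-ins for MLR/ANN/SVR (linear rules on irradiance x[0]).
models = {
    "MLR": lambda x: 0.8 * x[0],
    "ANN": lambda x: 0.7 * x[0] + 0.05,
    "SVR": lambda x: 0.75 * x[0] + 0.02,
}
mean_combiner = lambda inputs: sum(inputs[-4:]) / 4   # average the 4 forecasts

nwp = [0.6, 0.3]                    # made-up irradiance and cloud-cover values
history = [0.0, 0.35, 0.52]         # past hourly PV power (p.u.)
day_ahead = stage1_day_ahead(nwp, models)
hour_ahead = stage2_hour_ahead(nwp, day_ahead, history, mean_combiner)
```

In the study itself, the second-stage combiner is a trained random forest rather than the mean used here.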
Month        Persistence   MLR      ANN      SVR      Average   Ensemble
June         0.1136        0.0745   0.0680   0.0726   0.0622    0.0621
July         0.1189        0.0926   0.0865   0.0831   0.0809    0.0820
August       0.1306        0.0864   0.0811   0.0793   0.0758    0.0720
September    0.1298        0.0738   0.0724   0.0776   0.0730    0.0693
October      0.1280        0.0723   0.0670   0.0648   0.0652    0.0589
November     0.1267        0.0793   0.0665   0.0679   0.0665    0.0609
December     0.1168        0.0618   0.0542   0.0604   0.0556    0.0500
January      0.1155        0.0705   0.0526   0.0552   0.0525    0.0470
February     0.1150        0.0874   0.0704   0.0749   0.0670    0.0628
March        0.1229        0.0855   0.0805   0.0832   0.0786    0.0766
April        0.1138        0.0748   0.0637   0.0648   0.0642    0.0605
May          0.1189        0.0571   0.0545   0.0566   0.0588    0.0513
Average      0.1209        0.0763   0.0681   0.0700   0.0667    0.0628
Table III: MBE of Different Models and Combined Forecasts
Month        Persistence   MLR        ANN       SVR        Average   Ensemble
June         0.0106        -4.5496    -4.4558   -2.0429    -2.7594   -0.1668
July         0.0272        -15.6600   -9.7906   -14.1870   -9.9026   -9.3774
August       0.1474        -1.0525    -2.3990   -7.1867    -2.6227   -0.7970
September    0.1449        -0.0871    1.5521    -2.3912    -0.1953   2.0591
October      0.0651        3.1621     -4.7463   -5.0227    -1.6354   0.4065
November     0.1008        9.0623     0.8405    -0.6515    2.3380    3.8489
December     0.0435        6.9379     -0.2760   -5.1266    0.3947    0.2941
January      0.0590        -5.8772    -4.1514   -3.8977    -3.4668   -2.7234
February     0.0289        -1.6228    -2.8414   -4.3724    -2.2019   -0.9419
March        0.0924        0.8454     3.5505    -0.9198    0.8921    5.2160
April        0.1601        5.0138     -1.6179   -4.1976    -0.1604   0.1270
May          0.0272        2.0297     2.1124    -1.4953    0.6685    2.6174
Average      0.076         -0.150     -1.852    -4.291     -1.554    0.047
Fig. 5 data (improvement of the combined forecasts over the other models): Persistence 48%, MLR 18%, ANN 8%, SVR 12%, Simple Average 6%.
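The Fig. 5 percentages can be cross-checked against the Table II average RMSEs via the skill-score formula (5); the short script below does so (the SVR value computed this way comes out near 10% rather than the poster's 12%, possibly due to rounding or a monthly averaging convention):

```python
# Improvement of the ensemble over each benchmark, from Table II average RMSEs.
avg_rmse = {
    "Persistence": 0.1209, "MLR": 0.0763, "ANN": 0.0681,
    "SVR": 0.0700, "Simple Average": 0.0667,
}
ensemble_rmse = 0.0628

# Eq. (5) with the ensemble as the method and each model as the benchmark.
improvement = {name: (1 - ensemble_rmse / r) * 100 for name, r in avg_rmse.items()}
# Persistence ~48%, MLR ~18%, ANN ~8%, Simple Average ~6%
```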
Table IV: RMSE and RMSERR of Different Forecasts over the Entire Year
To evaluate the ramp-events capability of the combined forecasts, we implemented the evaluation metric RMSERR as in (6):
Therefore, in terms of ramp events, the combined forecasts are not the best, although in terms of RMSE they are found to be the most accurate.
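A toy illustration with synthetic numbers (not the study's data) of why these two metrics can disagree: a forecast that smooths a sharp ramp can score reasonably on RMSE while RMSERR exposes the missed ramp.

```python
# Compare RMSE on power values versus RMSE on ramp rates (RMSERR).
import math

def _rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

obs      = [0.0, 0.1, 0.8, 0.9]   # sharp ramp between hours 2 and 3 (p.u.)
smoothed = [0.0, 0.3, 0.6, 0.9]   # gradual forecast of the same total rise

rr_obs = [obs[i + 1] - obs[i] for i in range(len(obs) - 1)]
rr_fc  = [smoothed[i + 1] - smoothed[i] for i in range(len(smoothed) - 1)]

rmse_power = _rmse(obs, smoothed)   # ~0.14: looks acceptable on power values
rmserr     = _rmse(rr_obs, rr_fc)   # ~0.28: twice as large on ramp rates
```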
Fig. 3: Diagram of combining the different forecasts: the individual forecasting models (MLR, ANN, SVR, and persistence) feed the ensemble learning (RF) stage, which produces the combined forecasts.
$f_{RF} = \dfrac{1}{B}\sum_{b=1}^{B} T_b(x)$  (8)

where $f_{RF}$ is the output of the random forest (RF), $B$ is the number of trees, and $T_b(x)$ is the prediction of tree $b$ for input $x$.
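Eq. (8) in miniature: the forest's prediction is just the average of its trees' predictions. Here, hypothetical hand-built threshold rules stand in for fitted regression trees.

```python
# Three toy "trees": each maps an input to a prediction via a threshold rule.
trees = [
    lambda x: 0.2 if x < 0.5 else 0.8,
    lambda x: 0.3 if x < 0.4 else 0.7,
    lambda x: 0.1 if x < 0.6 else 0.9,
]

def f_rf(x, trees):
    """Eq. (8): f_RF(x) = (1/B) * sum over b of T_b(x)."""
    return sum(t(x) for t in trees) / len(trees)

prediction = f_rf(0.55, trees)   # averages 0.8, 0.7, and 0.1
```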
Fig. 4 details: (a) The day-ahead forecasts cover hours 00:00-23:00 for each day from June through May and are issued at 00:00 from a 364-day training set; the inputs are past weather forecasts (solar irradiance, cloud cover, temperature, wind speed, humidity, precipitation, etc.), future weather forecasts, and past solar power observations. (b) The hourly combined forecasts for May 31st (00:00-23:00) use the same weather data plus the models' past and future outcomes (day-ahead: MLR, ANN, SVR; hour-ahead: persistence) and past solar power observations.
[Ref.1] R. Ulbricht, A. Thoß, H. Donker, G. Grafe, and W. Lehner, "Dealing with Uncertainty: An Empirical Study on the Relevance of Renewable Energy Forecasting Methods," in Data Analysis for Renewable Energy Integration, Springer, 2016.
Method    Persistence   MLR      ANN      SVR      Simple Average   Ensemble (Combined)
RMSE      0.1209        0.0763   0.0681   0.0700   0.0667           0.0628
RMSERR    0.1383        0.0771   0.0722   0.0747   0.0796           0.0750