BDE SC3.3 Workshop - Wind Farm Monitoring and advanced analytics


Wind Farm Monitoring and advanced analytics (Mr. Peter Clive, WoodGroup) at the BigDataEurope Workshop, Amsterdam, November 2017


  1. Response deficit analysis in wind farm performance monitoring
     Prof Dr Peter J M Clive, Wednesday, 28 November 2017
  2. Different kinds of data
     • SCADA time series data – statistics such as means and variances acquired over a succession of contiguous averaging intervals, e.g. 10-minute averages of wind speed, active power export, etc.
     • SCADA event data – instances of specific events recorded with details including detection and reset times, duration, event code, and the values of key parameters, e.g. alarm data
     • SCADA cumulative data – running totals of key quantities such as production, downtime, time in service, etc.
     • CMS data – high-frequency data for signal processing and comparison with set points
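The 10-minute time series statistics described above can be sketched in plain Python. This is a minimal illustration, not production SCADA code; the 1 Hz sample rate and the toy wind-speed series are assumptions.

```python
def ten_minute_stats(samples, seconds_per_sample=1):
    """Condense raw samples into contiguous 10-minute mean/variance records."""
    per_interval = 600 // seconds_per_sample
    stats = []
    for start in range(0, len(samples) - per_interval + 1, per_interval):
        window = samples[start:start + per_interval]
        mean = sum(window) / len(window)
        variance = sum((x - mean) ** 2 for x in window) / len(window)
        stats.append({"mean": mean, "variance": variance})
    return stats

# 30 minutes of 1 Hz wind-speed samples (illustrative) -> three 10-minute records
wind = [8.0 + 0.001 * i for i in range(1800)]
records = ten_minute_stats(wind)
print(len(records))  # 3
```

The same reduction would be applied per data tag (wind speed, active power, etc.) to produce the concurrent 10-minute records that later steps pair up.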
  3. Different kinds of data
     • Data from individual wind turbines – SCADA, CMS
     • Sub-station data
     • Point-of-sale meter data
     • On-site met mast data
       – Permanent met mast
       – Power performance assessment reference mast
     • Remote sensing data
       – Nacelle-mounted lidar
       – Wind profilers (lidar, sodar)
       – Scanning lidars
     Understand the output in terms of production and status information; understand the incident wind resource to which the wind turbines are responding.
  4. Different kinds of data
     • Condition monitoring
       – Acquisition of high-frequency CMS signals
       – Sensors installed on drive train components
       – Accelerometers, strain gauges, oil particulate counters, temperature sensors, etc.
       – Signal processing, set points and thresholds
     • Performance monitoring
       – Uses routine operational SCADA data
       – Accumulation of statistics
       – Trends and anomalies detected
       – Integration of time series and event data
       – Robust, with a low incidence of false positives
  5. Case studies: Response Deficit Analysis of SCADA data
     • Plots illustrating the variation of one parameter (e.g. active power) in response to variations in another (e.g. wind speed or bearing temperature) cannot be individually inspected cost-effectively.
     • Response Deficit Analysis enables the statistical characterisation of these response curves so that a "graph of graphs" can be produced, which an analyst can interpret instantly to identify deviant behaviour in a timely, focused way that optimally leverages their experience and expertise.
  6. Response Deficit Analysis (RDA)
     1. Select two data tags that can be paired, for example 10-minute average hub-height wind speed and concurrent 10-minute average active power.
     2. This allows the observed power curve to be compared to a reference power curve.
     3. N.B. the same technique can be applied to any relationship, such as RPM v. pitch angle, or drive-end v. non-drive-end bearing temperature.
     4. The data tag values exhibit a relationship (for example, the power curve): one value varies in response to variations in the other.
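Steps 1–2 above can be sketched as a simple method-of-bins power curve: pair concurrent 10-minute wind-speed and active-power records, then average power within wind-speed bins. The 1 m/s bin width and the toy records are assumptions for illustration.

```python
def observed_power_curve(wind_speeds, powers, bin_width=1.0):
    """Average power in each wind-speed bin: {bin centre: mean power}."""
    bins = {}
    for v, p in zip(wind_speeds, powers):
        centre = (int(v / bin_width) + 0.5) * bin_width
        bins.setdefault(centre, []).append(p)
    return {c: sum(ps) / len(ps) for c, ps in sorted(bins.items())}

# Illustrative paired 10-minute records: wind speed (m/s), active power (kW)
speeds = [4.2, 4.8, 5.1, 5.9, 6.4]
powers = [120.0, 180.0, 250.0, 310.0, 420.0]
curve = observed_power_curve(speeds, powers)
print(curve)  # {4.5: 150.0, 5.5: 280.0, 6.5: 420.0}
```

The same binning applied to any other pair of tags (e.g. RPM and pitch angle) yields the corresponding observed response curve.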
  7. Response Deficit Analysis (RDA)
     5. Select a reference response. This could be representative, typical or warranted, depending on why you are undertaking RDA. For example:
        • The warranted power curve
        • The long-term average observed power curve
        • The power curve observed on average over a number of turbines during the short-term period under investigation
        • Some other reference considered typical or representative
  8. Response Deficit Analysis (RDA)
     6. Observe measured responses in groups of paired tags (for example, grouped by turbine and period of time, generating a measured power curve for each turbine for the period in question).
     7. Subtract the measured responses from the reference response: these are the response deficits (for example, subtract the measured power curve from the reference power curve).
     8. Choose metric generators. These are functions whose values can be weighted by the response deficit (in the case of a power curve, these could be functions of wind speed).
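Steps 6–8 can be sketched as follows. The reference and measured curves and the choice of simple polynomial generators are illustrative assumptions; the slides describe generators only as functions of wind speed.

```python
# Step 7: response deficit (reference minus measured) per wind-speed bin.
# Values are illustrative: kW at each bin centre for one turbine, one week.
reference = {5.0: 200.0, 6.0: 320.0, 7.0: 480.0}
measured  = {5.0: 190.0, 6.0: 300.0, 7.0: 470.0}

deficit = {v: reference[v] - measured[v] for v in reference}
print(deficit)  # {5.0: 10.0, 6.0: 20.0, 7.0: 10.0}

# Step 8: metric generators - polynomial functions of wind speed (assumed forms)
generators = {
    "g2": lambda v: v ** 2,   # 2nd order, used later for normalisation
    "g3": lambda v: v ** 3,   # 3rd order
    "g4": lambda v: v ** 4,   # 4th order
}
```

A positive deficit in a bin means the turbine produced less than the reference at that wind speed.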
  9. Response deficit
  10. Response deficit
  11. Metric generators
  12. Metric generators
  13. Response Deficit Analysis (RDA)
     9. Calculate the "performance" or "response" metrics.
        • These are the average values of the metric generator functions weighted by the response deficit.
        • Calculate at least two.
        • These can then be plotted against each other to characterise the response relative to the reference for the group of paired tags.
        • This provides a "graph of graphs" where each point represents one instance of the response under investigation.
        • Anomalous responses are immediately obvious.
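Step 9, the deficit-weighted average of a generator, can be sketched directly. The deficit values and the 3rd/4th-order generators are the same illustrative assumptions as above.

```python
def response_metric(generator, deficit):
    """Average of a metric generator over wind-speed bins, weighted by the
    response deficit in each bin (step 9)."""
    total_weight = sum(deficit.values())
    if total_weight == 0:
        return 0.0
    return sum(generator(v) * w for v, w in deficit.items()) / total_weight

# Illustrative response deficit for one turbine-week (kW per bin centre)
deficit = {5.0: 10.0, 6.0: 20.0, 7.0: 10.0}

m3 = response_metric(lambda v: v ** 3, deficit)   # 3rd-order metric
m4 = response_metric(lambda v: v ** 4, deficit)   # 4th-order metric
print(round(m3, 2), round(m4, 2))  # 225.0 1404.5
```

Each turbine-week yields one (m3, m4) pair, i.e. one point in the "graph of graphs".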
  14. Response Deficit Analysis (RDA)
     10. Normalise the metrics by a common "normalisation" metric generator.
        • Raise the normalisation metric to the power of each metric's order divided by the normalisation metric's order.
        • For example:
          – Metric generator 1 is a 3rd-order polynomial, proportional to the skewness of the response deficit.
          – Metric generator 2 is a 4th-order polynomial, proportional to the kurtosis of the response deficit.
          – Divide metric 1 by a 2nd-order normalisation metric (proportional to the variance of the response deficit) raised to the power 3/2.
          – Divide metric 2 by the same normalisation metric raised to the power 2 (= 4/2).
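The normalisation in step 10 mirrors how skewness and kurtosis normalise 3rd and 4th moments by variance^(3/2) and variance^2. A sketch, using the illustrative metric values produced by the weighted averages above:

```python
# Illustrative deficit-weighted metrics of order 2, 3 and 4 (assumed values,
# consistent with the toy deficit {5.0: 10, 6.0: 20, 7.0: 10} used earlier)
m2, m3, m4 = 36.5, 225.0, 1404.5

# Step 10: divide each metric by the 2nd-order normalisation metric raised
# to (metric order / normalisation order)
norm3 = m3 / m2 ** (3 / 2)   # skewness-like metric (order 3/2)
norm4 = m4 / m2 ** (4 / 2)   # kurtosis-like metric (order 4/2 = 2)
print(round(norm3, 4), round(norm4, 4))  # 1.0203 1.0542
```

Normalisation makes the metrics dimensionless, so turbine-weeks with deficits of different overall magnitude remain comparable on the same plot.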
  15. Response Deficit Analysis (RDA)
     11. Applying the metric generators yields response deficit metrics that can be plotted to visualise the data, creating a graph of graphs.
     12. For example, the metric obtained using generator 2 can be plotted against the metric obtained using generator 1 from step 10 above. An example is shown on the next slide.
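Steps 11–12 can be tied together in one sketch: compute the pair of normalised metrics for each turbine-week and collect them as points of the "graph of graphs". The three toy deficits below are assumptions; the third concentrates its deficit at high wind speed (e.g. a premature cut-out) and lands away from the other two points.

```python
def metrics_point(deficit):
    """One (metric 1, metric 2) point: deficit-weighted v^3 and v^4 averages,
    normalised by the weighted v^2 average raised to 3/2 and 2 respectively."""
    w = sum(deficit.values())
    avg = lambda n: sum(v ** n * d for v, d in deficit.items()) / w
    m2, m3, m4 = avg(2), avg(3), avg(4)
    return m3 / m2 ** 1.5, m4 / m2 ** 2

# Illustrative turbine-weeks: two similar deficits, one concentrated at 7 m/s
weeks = {
    "WTG01-w1": {5.0: 10.0, 6.0: 20.0, 7.0: 10.0},
    "WTG01-w2": {5.0: 12.0, 6.0: 18.0, 7.0: 11.0},
    "WTG02-w1": {5.0: 1.0, 6.0: 2.0, 7.0: 60.0},
}
points = {name: metrics_point(d) for name, d in weeks.items()}
for name, (x, y) in points.items():
    print(name, round(x, 4), round(y, 4))
```

Plotting metric 2 against metric 1 for many turbine-weeks produces the scatter shown on the next slide: a main sequence plus outliers.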
  16. Response Deficit Analysis: RDA metric plot, where each point represents one turbine's performance during one week. Inspection of performance metrics enables rapid identification of anomalous performance in seconds or minutes: anomalies (19% of AEP) stand apart from the main sequence.
  17. Case studies
  18. Case studies: Response Deficit Analysis immediately identifies which wind turbines, during which periods, have exhibited power performance anomalies.
  19. Case studies
  20. Case studies: Severe underperformance that had gone unnoticed for months was instantly detected using Response Deficit Analysis once SgurrTrend services were engaged. A controller fault due to an incorrect set point was causing production losses of nearly 20%.
  21. Case studies: Yield deficit analysis
  22. Case studies: Yield deficit analysis. Tower vibration occurs at a specific wind speed and hence rotor RPM, indicating rotor imbalance, probably due to poor pitch regulation in high shear. This incurs downtime and production losses of around 1%, and contributes to premature gearbox failure through high torque variance.
  23. Case studies
  24. Case studies: Pitch misalignment is immediately identified using SgurrTrend. The impact of this fault is a reduction of 10% in annual energy production (AEP).
  25. Case studies (Turbine 1, Turbine 2)
  26. Case studies (Turbine 1, Turbine 2): Wind turbine inter-comparison reveals anomalous or delinquent performance. In this case a delayed cut-in cost 1% of the affected turbine's production, with WTG01 losing more than 15 MWh per month as a result.
  27. Case studies
  28. Case studies: A controller fault is immediately identified using Response Deficit Analysis: a premature cut-out costing 1% of AEP. This was corrected by installing appropriate firmware and controller settings.
  29. Towards 3rd generation sensors
     • 1st generation: extrapolation – mast-mounted sensors and remote sensing vertical profilers
     • 2nd generation: inference – inference of wind conditions from measurements in multiple locations using scanning devices
     • 3rd generation: direct observation
       – Wind parameters of interest are all directly observed within the entire domain of interest
       – Measurement is intuitive: all that is required to interpret the measurement is knowledge of its purpose rather than instrument-specific expertise
       – Example: multiple synchronised lidars fulfil at least some of the requirements of a 3rd generation system
  30. IEA Use Cases
     IEA Wind Energy Task 32 is adopting a "use case" framework for describing the application of lidar in wind energy assessments, to ensure well-documented measurement techniques applied in a manner that is fit for purpose, with the degree of consistency required for investor confidence. A use case considers three things:
     • Data requirements: articulated without reference to the capabilities of the possible methods available to fulfil them.
     • Measurement method: multiple options are available, whose suitability depends upon the data requirements being fulfilled.
     • Situation: the performance of a particular method may depend upon the circumstances in which it is deployed.
  31. IEA Task 32 Lidar Use Cases
     Clifton, A. et al., IEA Wind Energy Task 32 Remote Sensing of Complex Flows by Doppler Wind Lidar: Issues and Preliminary Recommendations, NREL, 2015.
     (Diagram: measurement method, data acquisition situation, data requirements)
  32. IEA Task 32 Lidar Use Cases
     • What measurement accuracy is verified in this situation?
     • What data requirements arise in this situation?
     • What measurement method fulfils my data requirements?
  33. Pre-construction: OEMs' FEM and aero-elastic models
  34. Post-construction: WTG with SCADA, CMS, etc.
  35. Conclusions
     • Response Deficit Analysis is a general technique that can be applied to any data in which relationships between variables occur that can be compared to a reference.
     • The difference between the observed and reference relationships is the deficit.
     • Metrics are generated from this deficit using functions, in a similar way to calculating statistical moments.
     • These metrics can be plotted against each other to produce a "graph of graphs" amenable to rapid inspection.
     • Anomalous performance is made immediately obvious.
  36. Questions. Thank you.