This work is distributed under the Creative Commons Attribution 4.0 License.
Modular deep learning approach for wind farm power forecasting and wake loss prediction
Abstract. Power production of offshore wind farms depends on many parameters and is significantly affected by wake losses. Due to the variability of wind power and its rapidly increasing share in the total energy mix, accurate forecasting of the power production of a wind farm becomes increasingly important. This paper presents a novel data-driven methodology to construct a fast and accurate wind farm power model. The deep learning model is not limited to steady-state situations, but also captures the influence of temporal wind dynamics and the farm power controller on the power production of the wind farm. With a multi-component pipeline, weather forecasts from multiple meteorological forecast providers are incorporated to generate farm power forecasts over multiple time horizons. Furthermore, in conjunction with a data-driven turbine power model, the wind farm model can also be used to predict the wake losses. The proposed methodology includes a quantification of the prediction uncertainty, which is important for trading and power control applications. A key advantage of the data-driven approach is its high prediction speed compared to physics-based methods, so that it can be employed in applications where faster-than-real-time power forecasting is required. It is shown that the accuracy of the proposed power prediction model is better than that of several baseline machine learning models. The methodology is demonstrated for two large real-world offshore wind farms located within the Belgian-Dutch wind farm cluster in the North Sea.
Status: final response (author comments only)
RC1: 'Comment on wes-2024-94', Anonymous Referee #1, 05 Sep 2024
In this work, the authors lay out a multi-part modeling framework for making estimates of real-time power output and future projections of the performance of a real farm. They construct a series of deep learning models to map from forecasted wind conditions to real farm-observed conditions, through to a fast, accurate, and uncertainty-imbued power estimate for the farm. The result is an exemplary application of machine learning techniques to a relevant wind engineering problem that is immediately useful to farm operators.
I have two major criticisms of the manuscript, primarily centered on the exposition of the methods. In the wind farm control model, some clarity could be added in consideration of the intended audience of WES, many of whom are not AI/ML practitioners. A more circumspect description of terms including, but not limited to, "convolution branches", "feed-forward neural network", "dropout layers", and "dense layers" would be useful to allow the audience -- myself included -- to view the machine learning aspects of the work on their merits rather than simply as a black box. Additionally, the exposition of the weather forecasting methods is sparse and should be significantly expanded. Because it is key to the application of the work in this manuscript, it should be clear how it works and what its strengths and weaknesses are, especially in light of the results for lookahead forecasting.
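To make the vocabulary in the previous paragraph concrete for non-ML readers: a "convolution branch" slides small learned filters over an input sequence, a "dense" (fully connected) layer mixes all resulting features, and a "dropout layer" randomly zeroes activations during training. A minimal NumPy illustration follows; the input length, filter count, and layer sizes are hypothetical stand-ins, not the architecture from the paper under review:

```python
import numpy as np

rng = np.random.default_rng(1)

x = rng.normal(size=16)            # hypothetical input time series (16 steps)
filters = rng.normal(size=(8, 3))  # 8 convolution filters of width 3

# Convolution branch: each filter slides over the series, yielding a feature map.
maps = np.array([np.convolve(x, f, mode="valid") for f in filters])  # shape (8, 14)

# Dense (fully connected) layer: every flattened feature connects to every output unit.
flat = maps.ravel()                            # shape (112,)
W = rng.normal(size=(flat.size, 4))
dense_out = np.maximum(flat @ W, 0.0)          # ReLU activation, shape (4,)

# Dropout layer: randomly zero a fraction of activations (training-time behavior),
# rescaling the survivors so the expected activation is unchanged.
keep = rng.random(dense_out.shape) > 0.25
dropped = dense_out * keep / 0.75
print(maps.shape, dense_out.shape, dropped.shape)
```

A real model would learn the filter and weight values by gradient descent; the sketch only shows what each layer type computes.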
I would also like to see more analysis of the results. The model clearly performs well at estimating farm power in static wind conditions. The results in dynamic conditions are less clear to interpret, and additional text and clarification of the plots in Figures 21 and 22 would be appreciated. This seems to be a major benefit that can be conferred by this approach, and thus more clarity around this area would be very valuable to the end result. Moreover, this would couple with better exposition of the forecasting efforts, such that together they can be used to understand where and how this approach leads to errors in forecasted farm power.
I have a few additional small technical notes, which are addressed below by line number:
- 177: the heuristic sorting algorithm for turbine selection is slightly unclear and also possibly could be affected by flow heterogeneity, blockage, or terrain; please clarify the choice of this algorithm versus purely geographic sorting, etc.
- 216: the weather forecast data, like the weather forecast modeling, lacks clarity of what it contains and how it is used
- 239: be careful with the use of "instantaneous" response variables, they can often have significant inertial effects that should be justified
- ~370: the frequency of wind resource conditions is unclear, and this passes through to understanding how frequent high TI/high wind direction variance conditions occur in e.g. Figure 6; consider highlighting wind condition frequency in some way
- 416: I suggest quantifying the quality of the CI estimates: the observed value should fall inside the 68% CI explicitly 68% of the time (rather than "most of the time") and fall outside the 95% CI only 5% of the time (i.e. "rarely") for a well-posed confidence interval
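The calibration check suggested in the last item can be quantified by counting how often observations fall inside the predicted intervals. A minimal Python sketch, in which the predictive means, standard deviation, and "observations" are synthetic stand-ins rather than data from the manuscript:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Gaussian predictive distribution (mean, std) per timestamp,
# with synthetic observations drawn consistently with it.
mu = rng.uniform(100.0, 300.0, size=10_000)   # predicted mean farm power [MW]
sigma = 10.0                                  # predicted standard deviation [MW]
observed = rng.normal(mu, sigma)              # synthetic "measurements"

def empirical_coverage(obs, mean, std, z):
    """Fraction of observations inside the symmetric z-sigma interval."""
    inside = np.abs(obs - mean) <= z * std
    return inside.mean()

cov68 = empirical_coverage(observed, mu, sigma, 1.0)   # should be near 0.68
cov95 = empirical_coverage(observed, mu, sigma, 1.96)  # should be near 0.95
print(f"68% CI coverage: {cov68:.3f}, 95% CI coverage: {cov95:.3f}")
```

A well-calibrated model produces empirical coverages close to the nominal 0.68 and 0.95; systematic deviation indicates over- or under-confident intervals.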
Overall the paper is of high quality and in my opinion is an exemplary application of ML to a relevant problem to wind energy researchers and operators.
Citation: https://doi.org/10.5194/wes-2024-94-RC1
RC2: 'Comment on wes-2024-94', Anonymous Referee #2, 10 Nov 2024
This paper is highly detailed and well-structured, presenting a comprehensive approach to wind farm power forecasting using modular deep learning models. The authors effectively utilize a robust data foundation, drawing from extensive SCADA data, to build modular machine learning models capable of predicting wind farm power while also accounting for wake effects. The integration of individual turbine models into a wind farm model is a significant strength of this work, enabling a precise representation of wind farm dynamics. Additionally, the thorough analysis of various factors that influence wind farm power, such as wake effects, turbulence, and site-specific conditions, adds substantial value to the research and offers an insightful perspective on the complex influences impacting power forecasting.
While the methodology is technically sound, the high level of complexity could pose challenges for practical deployment and maintenance in real-world wind farm operations. A discussion on the practical implications of model implementation and maintenance, including possible strategies for overcoming these challenges, would be a valuable addition to enhance the model’s applicability for wind farm operators.
The paper would benefit from including a comparison with simpler forecasting models in terms of prediction accuracy. Demonstrating the added value of this complex approach over basic models would strengthen the justification for its use, especially for readers interested in balancing accuracy with model complexity in practical applications.
Line 91: The statement "The model proposed in this paper predicts the power of a complete wind farm as a whole" may lead to some ambiguity. Since the total power output of a wind farm is often predicted based on aggregate measurements, it would be more accurate to emphasize that this paper's contribution lies in the combination of individual turbine models with wake effects to produce an overall wind farm forecast.
Citation: https://doi.org/10.5194/wes-2024-94-RC2
RC3: 'Comment on wes-2024-94', Anonymous Referee #3, 13 Nov 2024
# General comments
This work proposes a new machine learning model for wind farm power forecasting. The model is able to predict the farm power and the wake losses at different time scales, by combining weather forecast services and deep neural networks with richer inputs. It is an interesting work proposing a new method to leverage both machine learning techniques and expert tools such as weather forecast services. Using Monte-Carlo dropout is a really nice touch, as it demonstrates a way of quantifying the uncertainties with ML approaches. I have 3 major criticisms: 1) the readability is not great, 2) too much information about the ML models is missing, and 3) the experiments do not really demonstrate the added value of the proposed methodology.
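For readers unfamiliar with the Monte-Carlo dropout technique mentioned above: it keeps dropout active at inference time and aggregates many stochastic forward passes into a predictive mean and spread. A minimal NumPy sketch; the tiny two-layer network and its random weights are hypothetical illustrations, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "trained" weights of a tiny two-layer regression network.
W1 = rng.normal(size=(4, 32)); b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1)); b2 = np.zeros(1)

def forward(x, rng, p_drop=0.2):
    """One stochastic forward pass with dropout kept ON at inference."""
    h = np.maximum(x @ W1 + b1, 0.0)        # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop     # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)           # inverted-dropout rescaling
    return (h @ W2 + b2).squeeze()

x = rng.normal(size=(1, 4))                 # one hypothetical input sample
samples = np.array([forward(x, rng) for _ in range(200)])

mean, std = samples.mean(), samples.std()   # predictive mean and uncertainty proxy
print(f"prediction: {mean:.2f} +/- {std:.2f}")
```

The spread of the repeated stochastic predictions serves as the model-uncertainty estimate; in the manuscript this is what feeds the confidence intervals.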
1) The paper could generally benefit from more graphics giving an overview of the whole model, showing the different elements and their dependencies. I think you developed quite an interesting architecture, and it would be really helpful to have a single picture summarizing the whole process. Then, it would become much easier for the reader to know to what submodule a paragraph is referring. Figure 3 tries to do so, but I find it not clear enough and too general. At the beginning of each Section, you can introduce the different submodules you're going to explain (it is missing in 2.2 for example), etc. There are some sub-Sections not in the right place: for example, train/test/validation data (2.1.5) is not part of the data sources, but is part of the experimental setting; the farm internal wake loss (2.2.2) is not part of the ML models, but is a simple formula using the outputs of some ML models, if I correctly understood; etc.
2) You have multiple "ML models" in your proposed approach. Sometimes you describe them, sometimes you don't. For example, the ML model for mapping weather forecasts (2.2.5) is interesting: adapting a weather forecast to a specific wind farm is a great idea, but you only give 2 lines of explanation on it. And you just describe it as an "ML model": you need to give specific algorithms and models (is it a deep neural network, is it a linear regression, etc.). More generally, try to avoid generic expressions such as "ML models", but directly use the correct and specific type of algorithm. In 2.2.1, how do you go from your K branches to a feed-forward layer? You need to add more details to your explanations and figures. In the results Section, you need to give all the hyperparameters of your training (learning rate, etc.). You need to carefully describe each module of your approach and provide sufficient details to ensure your work can be reproduced.
3) In the results, I miss 2 important messages: how you justify (quantify) the novelties of your architecture, and how you compare yourself with other works. For example: https://iopscience.iop.org/article/10.1088/1742-6596/2767/9/092014. In 3.1.2 you introduce 2 deep neural networks, but you should introduce them in the methodology section and justify why they are good baselines. More generally, I feel that the paper lacks a pedagogical approach. You propose a complex and interesting ML-based model, but you need to justify each of your implementation choices. For example: you decide to start from a simple feed-forward network, and you show its limitations. Then you propose some improvements. You use a weather forecast model, and you show that it does not perform well on a specific wind farm; then you propose an ML approach to adapt it, etc. And this way, you build your final model in a clear way. In the results, it would be interesting to quantify the impact of each submodule: the impact of separating or not the inputs depending on their direct or indirect impact on the farm power, adapting weather forecast models or not, etc. You need to compare your approach with other baselines: other ML-based methods, simpler versions of your solution, adapted and non-adapted weather forecasts, etc.
I have 2 bonus questions regarding the method used to compute upstream wind turbines. 1) As I understand, every turbine can be defined as an "upstream" turbine if it has some other turbines in its wake. Therefore, even if a turbine is in the middle of the farm, it can impact the general wind flow for more downstream machines. And I feel that considering only the first upstream machines could result in a loss of pertinent data, for some wind farm layouts at least. Then, how do you justify considering only the first upstream turbines? 2) Why do you always consider a fixed number of upstream turbines? I think it is related to the constraints of the deep neural network's inputs, but maybe you could specify it.
# Specific comments
- Please always add a comma after the "e.g." and "i.e." expressions.
- Please define acronyms the first time you use them. For example, you never define "PC".
- Please improve the names of your variables: it is uncommon to use 2 letters for the same variable.
- Please use different notations when referring to time series and single time step values.
- Please use sub-Sections in the Introduction, having a single paragraph does not really help your readers.
- At multiple times you use the "faster than real-time power forecasting" expression. I don't really get what you mean by that. What can be faster than a "real-time" forecast?
- Line 7: I would specify the order of magnitude of the "multiple time horizons" in the abstract.
- Line 8: what do you mean by "predict the wake losses"? Do you mean the wake losses percentage or more detailed wake losses info (like in a steady-state simulation)?
- Line 70: I would give some examples of different commercial weather forecast services, their differences and their advantages.
- Lines 131 to 138: I find this paragraph important but not clear enough. What about a table with the different data sources, showing the differences in a direct way (time resolution, accuracy, etc.)?
- Line 154: "an additional data field is available that expresses", fix formulation.
- Line 182: please finish your sentence by adding a full stop at the end (and possibly commas at the end of each enumeration item, except the last one).
- Line 219: this Section is about the methodology and not the actual data sources, it shouldn't be here.
- Line 246: do you mean "8 filters" instead of "8 kernels"?
- Line 299: this part is not clear, what is "an equivalent model for a single turbine"? Is it another deep neural network for a single turbine?
- Line 305: this part describes the turbine model, but it should come before the "farm internal wake loss". And the "farm internal wake loss" is not really an "ML model" as it is just a formula using the output of 2 ML models. For clarity, I wouldn't put it in the "ML models" sub-Section.
- Line 332: it is interesting to adapt weather forecast service to a specific wind farm using an ML-model. But what is the ML-model used, and why give only 2 lines of explanations for this?
- Line 341: the computing hardware and software are not part of the methodology, but belong with the results / simulations / experiments.
- Line 344: I am not familiar with operations, but it seems to be a strong assessment. Is it enough to declare that your model can be used by wind farm operators?
- Line 254: why can you not give precise numbers for the quantities of turbines?
- Line 399: you introduce 2 new deep neural networks quite fast here. They should be introduced in the methodology, as 2 baselines. And you need to give more details about their architecture and how they differ from your model.
Citation: https://doi.org/10.5194/wes-2024-94-RC3
AC1: 'Author comments to RC1, RC2 and RC3', Stijn Ally, 11 Dec 2024
We would like to thank all reviewers for their thorough analysis of our manuscript and their constructive comments. We have carefully considered each comment and provide below our answers. We will revise our manuscript accordingly and believe that the revision will address all reviewers’ concerns. We are looking forward to hearing your feedback.