This work is distributed under the Creative Commons Attribution 4.0 License.
Multi-task Learning Long Short-term Memory Model to Emulate Wind Turbine Blade Dynamics
Abstract. The high computational costs in the dynamic analysis of wind turbines prohibit efficient design assessments and site-specific performance estimations. This study investigates the suitability of various dimensionality reduction techniques combined with a Long Short-term Memory (LSTM) algorithm to predict turbine responses, addressing the computational challenges posed by high-dimensional inflow wind fields and complex time-stepping integration schemes. Feature selection criteria and a multi-stage modelling approach are implemented to arrive at a robust model configuration. Additionally, a multi-task learning strategy is implemented, which enables the LSTM model to predict multiple target variables simultaneously, eliminating the need for a separate model for each target variable. Results demonstrate that this combined approach significantly reduces computational costs while maintaining consistent accuracy across all target variables, thereby facilitating design feasibility studies and site-specific analyses of wind turbines.
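One common way to realise such a multi-task LSTM is a shared recurrent encoder with one output head per target channel, so a single training run serves all response quantities. The sketch below is a minimal PyTorch illustration of that idea; the layer sizes, number of targets, and head structure are hypothetical and not taken from the manuscript.

```python
import torch
import torch.nn as nn

class MultiTaskLSTM(nn.Module):
    """Shared LSTM encoder with one linear head per target channel
    (e.g. blade tip deflections and root bending moments)."""

    def __init__(self, n_features, hidden_size, n_targets):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_size, 1) for _ in range(n_targets)]
        )

    def forward(self, x):
        # x: (batch, time, n_features) -- reduced-order inflow features
        encoded, _ = self.lstm(x)
        # one predicted time series per target, stacked on the last axis
        return torch.cat([head(encoded) for head in self.heads], dim=-1)

# hypothetical sizes: 32 reduced inflow features, 4 blade-response targets
model = MultiTaskLSTM(n_features=32, hidden_size=64, n_targets=4)
y_hat = model(torch.randn(8, 600, 32))   # -> shape (8, 600, 4)
```

Sharing the recurrent encoder across targets is what removes the need to train and store a separate surrogate model for each response channel.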
Status: open (extended)
RC1: 'Comment on wes-2024-105', Anonymous Referee #1, 06 Nov 2024
The article presents a method of developing LSTM models for predicting blade response using multi-stage modelling and multi-task learning with dimensionality reduction techniques. Overall, the article is well written, presenting a novel and relevant contribution to data-driven methods. There are several minor aspects of the article which could be modified to improve its quality:
- The article could be made more concise.
- Consider shortening Section 2: it is not necessary to explain how TurbSim works; it may be possible to reorganise by removing Section 2 entirely and including the important parts in Section 6.
- It is sufficient to simply state that the IEA 15 MW reference turbine is used; Figure 2 and Table 1 are not necessary.
- In Section 1 or 5, a justification should be included for why an LSTM (or RNNs in general) is used in this work, as opposed to conventional neural networks.
- Table 3: the optimized value for the fully connected layer is at the upper limit of the search range; why is the range not expanded to ensure it is actually optimal?
- Table 5: between the RMSE values and the signals shown in Figures 14 to 17, it is clear that the LSTM model is accurate. However, RMSE may not necessarily be the best metric to measure model performance, as it does not take into account the variation of the signal (e.g. out-of-plane deflection has much larger magnitudes than in-plane deflection). Consider the use of other metrics such as Variance Accounted For or the Confidence Index (a sketch of Variance Accounted For is given after this comment).
- Figure 10: it is unclear what this figure represents; some additional explanation in the caption may be helpful.
- Example LSTM predictions (Figures 9, 12, 14 to 17) show results at TI = 11.54; however, the range in Table 2 indicates that the maximum TI should be 7.629.
- Example LSTM predictions (Figures 9, 12, 14 to 17) only show U = 13.65 m/s and above and TI = 6.19 to 11.54; it may be interesting to see predictions at lower wind speeds or higher TI (if possible).
Citation: https://doi.org/10.5194/wes-2024-105-RC1
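For reference, a common definition of the Variance Accounted For (VAF) metric suggested by the referee is shown below. This is a minimal NumPy sketch of the standard formula, not evaluation code from the manuscript.

```python
import numpy as np

def vaf(y_true, y_pred):
    """Variance Accounted For in percent: 100 * (1 - Var(error) / Var(signal)).
    Unlike plain RMSE, it is normalised by the signal variance, so channels
    with very different magnitudes (e.g. out-of-plane vs. in-plane deflection)
    can be compared on the same scale."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return 100.0 * (1.0 - np.var(y_true - y_pred) / np.var(y_true))
```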
Model code and software
Model and Code: Shubham Baisthakur, https://doi.org/10.5281/zenodo.13305715