Comparison Metrics Microscale Simulation Challenge for Wind Resource Assessment
Abstract. The main goals of a wind resource assessment (WRA) at a given site are to estimate the wind speed and annual energy production (AEP) of the planned wind turbines. Several steps are involved in going from initial wind speed estimations at specific locations to a comprehensive full-scale AEP assessment. These steps differ significantly depending on the chosen tool and the individuals performing the assessment. The goal of this work is to compare different WRA simulation tools at the Perdigão site in Portugal, for which a large amount of wind measurement data is available, in terms of both accuracy and cost. Results from nine different simulations by five different modellers were obtained via the "IEA Wind Task 31 Comparison metrics simulation challenge for wind resource assessment in complex terrain", consisting of a range of linear models, Reynolds-Averaged Navier-Stokes (RANS) computational fluid dynamics models and Large Eddy Simulations (LES). The wind speed and AEP prediction errors for three different met mast positions across the site were investigated and further translated into relative "skill" and "cost" scores, using a method previously developed by the authors. This allowed the optimal simulation tool in terms of accuracy and cost to be chosen for this site. It was found that the RANS simulations achieved very high prediction accuracy at relatively low cost for both wind speed and AEP estimations. The LES simulations achieved very good wind speed predictions under certain conditions, but at a much higher cost, which in turn also reduced the number of possible simulations, leading to a decrease in AEP prediction accuracy. For some of the simulations, the forest canopy was explicitly modelled, which proved beneficial for wind speed predictions at lower heights above the ground, but led to under-estimations of wind speeds at upper heights, decreasing the AEP prediction accuracy.
Lastly, only weak correlations between wind speed and AEP prediction errors were found at each position, showing that accurate wind modelling is not necessarily the only important variable in the WRA process, and that all the steps must be considered.
Florian Hammer et al.
Status: final response (author comments only)
- RC1: 'Comment on wes-2022-114', Rogier Floors, 24 Feb 2023
- RC2: 'Comment on wes-2022-114', Anonymous Referee #2, 28 Feb 2023
The paper presents a case study of different models for the Perdigão campaign. It is a noble goal to quantify the accuracy of a model and the resources used to run that model, but unfortunately, for me to trust the conclusions provided in the paper, I lack important details about both the model setups and the way the 'costs' are calculated. I think the paper should stay clear of drawing too general conclusions about which model is most 'promising' and instead only present the very specific cases for which the models are validated (e.g. from mast to mast). So in its current form I cannot recommend the paper for publication. I think major revisions are required to rethink the structure and/or allow other researchers to reproduce the results.

I am afraid that with so many models it will be very hard to describe all of the setups without making the paper 50 pages long. Perhaps a possibility is to put detailed model setups in the appendix. Alternatively, the text should be adapted so that the model setups are provided with enough level of detail to redo the simulations (see detailed comments below).

I also think that in a study like this, where no new theory is being presented (which is fine), it is particularly important that the data are openly available so that others can still benefit from the study. So I would suggest expanding the "Data availability" section with more than just the repository of all Perdigão data.
l16: "upper heights" sounds a bit strange. I suggest "higher heights" or something similar.
l26: long-term wind resource extrapolation: I would call this long-term wind resource correction. The way it is written, it seems like the long-term wind resource needs extrapolation, but it is the shorter-term measurements that need to be extrapolated to a longer-term climate.
l94: Will this turbine cause any wake effects? This is not discussed.
l149: Are all models outputting time series? Is this generic, or which time series are you talking about here?
l160: topography -> orography (topography is usually defined to include the roughness of the terrain)
Table 1: Common application range: I would rather call this complex or non-complex instead of flat or non-flat. The linearized model will probably work fine in non-flat terrain as long as no flow separation occurs.
l177: Who is 'our' here?
l187: terrain topography -> orography
l190: Corine Land Cover (European Union, 2018) database: Did you use raster or vector data? Which projection was used? Which datum?
l190: "The stem size and distribution reproduce the same canopy frontal solidity (Monti et al., 2019; Nepf, 2012) of the actual vegetation at the site extracted from the Corine Land Cover (European Union, 2018) database". This is not clear: what is the actual vegetation at the site? How can you get that from CORINE data, which is just a satellite-based product? How can that match the stem size and distribution?
PCE-LES: which source did you use for the terrain elevation?
Section 2.3: after reading this I was expecting all modellers use the same terrain elevation, but based on l198 I start to doubt that because there SRTM is mentioned. In the LES section no source is mentioned.
l215: Scaled how? You mean assuming the wind distribution for the three months is representative for the whole year?
l218: That reference does not really describe the WAsP stability model. Better to cite Troen and Petersen (1989). What stability setting are used in the end?
l219: What was the source of this tiff data?
l220: What is in the .map file? What was the source?
l221: There is several roughness tables in that reference? Which one was used?
l223: The direction variable: you mean wind direction? From which height? What is NCA?
l225: Which 'data'?
l225: References for MERRA and ERA5 missing
l226: Coefficient of determination between what and what?
l226: How do you define "very similar"?
l226: "some basic filtering", "constant line values", "some variables": specify what filtering, what is a constant line value, which variables?
l229: Section on stability: this can also be made quantitative.
l229: What kind of adjustment?
l230: What was optimized with respect to what?
l232: Please specify in more detail what kind of long-term correction you did.
l246: what was the upper domain boundary?
l246: Is the first bin centered around north or from 0 to 15 degrees?
l247: How do you define the wind shear?
l255: Similar to my previous comment: so then you assume the 3-month wind measurements are representative of the full year? That is fine, but it is inconsistent with the previous model setup (Windpro), where you apply a long-term correction.
l261: roughness height -> roughness length
l265: Is the roughness length varying with wind direction sector?
l267: grid independence study -> I assume the conclusion of the grid independence study was that the simulations were not dependent on resolution? Why was the resolution of 15 m optimal? In which sense was it optimal?
l285-l289: Taking the mean of an RMSE is mixing different error metrics. You should calculate the squared errors for each sector and take the root of the mean in the last step.
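The difference between the two aggregations can be illustrated with a minimal sketch (the per-sector errors below are hypothetical numbers, not from the paper):

```python
import math

# Hypothetical wind speed errors (m/s) grouped by direction sector.
sector_errors = {
    "N": [0.2, -0.1, 0.4],
    "E": [0.9, -0.7, 0.5],
    "S": [0.1, 0.0, -0.2],
}

# Mean of per-sector RMSEs: mixes error metrics across sectors.
per_sector_rmse = [
    math.sqrt(sum(e * e for e in errs) / len(errs))
    for errs in sector_errors.values()
]
mean_of_rmse = sum(per_sector_rmse) / len(per_sector_rmse)

# Pooled RMSE: collect all squared errors first, take the root last.
all_errs = [e for errs in sector_errors.values() for e in errs]
pooled_rmse = math.sqrt(sum(e * e for e in all_errs) / len(all_errs))

print(f"mean of RMSEs: {mean_of_rmse:.3f}, pooled RMSE: {pooled_rmse:.3f}")
```

With equal sector sample sizes, the pooled RMSE equals the root-mean-square of the per-sector RMSEs, so it is always at least as large as their plain mean; the two only coincide when all sectors have identical RMSE.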
l292: m/s should be in normal font
l302: it would be good to mention here that this is the calibration point.
l318: This is not that surprising because both masts are located on top of a hill. It would be useful to relate this to the "most similar predictor" discussion in https://wes.copernicus.org/articles/5/1679/2020/.
Fig 4: I am a bit confused about how big the errors are at 80 m. Wasn't that height used for calibration? How can there already be an RMSE of up to 0.4 m/s?
Sect 3.1.2: This section is hard to understand; what is the main message? The relative cost and skill scores appear suddenly in Fig 7, but it feels like some background on the numbers should be available (appendix?). As discussed, the costs and skill are extremely hard to quantify, so you could end up with any ranking of the models here. I would avoid drawing conclusions like "Taking the cost scores into account, E-Wind is the most suitable tool for the Perdigão site."
l370: mast 19? You mean 29?
l380: end of line: height.
Fig 8: Are we comparing AEP at the same heights here? That should be added somewhere.
l386: What is an AEP by sector? I only know about an AEP as the production for a year, i.e. for all sectors combined.
l423: It is again not quite clear to me how this is quantified. I would leave out generalizations like this and just discuss model differences. How does a single AEP prediction from one mast to the other make this model the best for the entire Perdigão site?
Fig 11: Where is mast 20?
Sect 3.3: I agree there are so many differences between the model chains used to calculate AEP that it is impossible to say what the exact reason is. If mast 29 is used for calibration, one would not expect any model error in AEP? So I would just leave this section out.
l454: It would be a very surprising conclusion if the AEP error did not depend on the wind speed error. What about air density? How has that been calculated in the different model chains?
Troen, I., & Lundtang Petersen, E. (1989). European Wind Atlas. Risø National Laboratory.