Comparison Metrics Microscale Simulation Challenge for Wind Resource Assessment
Abstract. The main goals of a wind resource assessment (WRA) at a given site are to estimate the wind speed and the annual energy production (AEP) of the planned wind turbines. Several steps are involved in going from initial wind speed estimations at specific locations to a comprehensive full-scale AEP assessment, and these steps differ significantly depending on the chosen tool and on the individuals performing the assessment. The goal of this work is to compare different WRA simulation tools, in terms of both accuracy and cost, at the Perdigão site in Portugal, for which a large amount of wind measurement data is available. Results from nine different simulations by five different modellers were obtained via the “IEA Wind Task 31 Comparison metrics simulation challenge for wind resource assessment in complex terrain”, covering a range of linear models, Reynolds-Averaged Navier-Stokes (RANS) computational fluid dynamics models and Large Eddy Simulations (LES). The wind speed and AEP prediction errors at three different met mast positions across the site were investigated and further translated into relative “skill” and “cost” scores, using a method previously developed by the authors. This allowed the simulation tool that is optimal in terms of accuracy and cost to be identified for this site. It was found that the RANS simulations achieved very high prediction accuracy at relatively low cost for both wind speed and AEP estimations. The LES simulations achieved very accurate wind speed predictions for certain conditions, but at a much higher cost, which in turn limited the number of feasible simulations and led to a decrease in AEP prediction accuracy. For some of the simulations, the forest canopy was explicitly modelled, which proved beneficial for wind speed predictions at lower heights above the ground, but led to under-estimations of wind speeds at greater heights, decreasing the AEP prediction accuracy. Lastly, low correlations between the wind speed and AEP prediction errors were found at each position, showing that accurate wind modelling is not the only important factor in the WRA process, and that all of the steps must be considered.
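As a rough illustration of how prediction errors and costs might be turned into relative scores and combined (this is not the authors' actual scoring method from the cited previous work; all model names and numbers below are hypothetical), consider the following Python sketch:

```python
# Illustrative sketch only: hypothetical RMSE and cost figures for three workflows.
rmse = {"Linear": 0.9, "RANS": 0.5, "LES": 0.6}   # wind speed RMSE in m/s (made-up values)
cost = {"Linear": 1.0, "RANS": 8.0, "LES": 120.0} # total effort in person/CPU hours (made-up)

# Relative skill: the best (lowest) RMSE gets 1.0, others are scaled down.
best_rmse = min(rmse.values())
skill = {m: best_rmse / v for m, v in rmse.items()}

# Relative cost score: the cheapest workflow gets 1.0, others are scaled down.
best_cost = min(cost.values())
rel_cost = {m: best_cost / v for m, v in cost.items()}

# One possible accuracy-versus-cost ranking: product of the two relative scores.
for model in sorted(rmse, key=lambda m: skill[m] * rel_cost[m], reverse=True):
    print(f"{model}: skill={skill[model]:.2f}, cost score={rel_cost[model]:.2f}")
```

Under this toy normalisation, the most accurate and the cheapest tool each receive a score of 1.0, and the product gives one possible trade-off ranking.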
Withdrawal notice
This preprint has been withdrawn.
Interactive discussion
Status: closed
RC1: 'Comment on wes-2022-114', Rogier Floors, 24 Feb 2023
The paper presents a case study of different models for the Perdigão campaign. It is a noble goal to quantify the accuracy of a model and the resources used to run that model, but unfortunately, for me to trust the conclusions provided in the paper, I lack important details about both the model setups and the way the 'costs' are calculated. I think the paper should stay clear of drawing too general conclusions about which model is most 'promising' and instead only present the very specific cases for which the models are validated (e.g. from mast to mast). So in its current form I cannot recommend the paper for publication. I think major revisions are required to rethink the structure and/or allow other researchers to reproduce the results. I am afraid that with so many models it will be very hard to describe all of the setups without making the paper 50 pages long. Perhaps a possibility is to put detailed model setups in the appendix. Alternatively, the text should be adapted so that the model setups are provided with enough detail to redo the simulations (see detailed comments below). I also think that in a study like this, where no new theory is being presented (which is fine), it is particularly important that the data are openly available so that others can still benefit from the study. So I would suggest expanding the "Data availability" section with more than just the repository of all Perdigão data.
l16: upper heights sounds a bit strange. I suggest higher heights or something similar.
l26: long-term wind resource extrapolation: I would call this long-term wind resource correction; the way it is written, it seems like the long-term wind resource needs extrapolation, but it is the shorter-term measurements that need to be extrapolated to a longer-term climate.
l94: Will this turbine cause any wake effects? This is not discussed.
l149: Are all models outputting time series? Is this generic, or which time series are you talking about here?
l160: topography -> orography (topography is usually defined to include the roughness of the terrain)
Table 1: Common application range: I would rather call this complex or non-complex instead of flat or non-flat. The linearized model will probably work fine in non-flat terrain as long as no flow separation occurs.
l177: Who is 'our' here?
l187: terrain topography -> orography
l190: Corine Land Cover (European Union, 2018) database: Did you use raster or vector data? Which projection was used? Which datum?
l190: "The stem size and distribution reproduce the same canopy frontal solidity (Monti et al., 2019; Nepf,
190 2012) of the actual vegetation at the site extracted from the Corine Land Cover (European Union, 2018) database".
This is not clear: what is the actual vegetation at the site? How can you get that from CORINE data which is just a satellite based product?
How can that match the stem size and distribution?
PCE-LES: which source did you use for the terrain elevation?
Section 2.3: after reading this I was expecting all modellers to use the same terrain elevation, but based on l198 I start to doubt that, because SRTM is mentioned there. In the LES section no source is mentioned.
l215: Scaled how? You mean assuming the wind distribution for the three months is representative for the whole year?
l218: That reference does not really describe the WAsP stability model. Better to cite Troen and Petersen (1989). What stability settings are used in the end?
l219: What was the source of this tiff data?
l220: What is in the .map file? What was the source?
l221: There are several roughness tables in that reference. Which one was used?
l223: The direction variable: you mean wind direction? From which height? What is NCA?
l225: Which 'data'?
l225: References for MERRA and ERA5 missing
l226: Coefficient of determination between what and what?
l226: How do you define "very similar"?
l226: "some basic filtering", "constant line values", "some variables": specify what filtering, what is a constant line value, which variables?
l229: Section on stability: this can also be made quantitative.
l229: What kind of adjustment?
l230: What was optimized with respect to what?
l232: Please specify in more detail what kind of long-term correction you did.
l246: what was the upper domain boundary?
l246: Is the first bin centered around north or from 0 to 15 degrees?
l247: How do you define the wind shear?
l255: Similar to my previous comment: so then you assume the 3-month wind measurements are representative of the full year? That is fine, but it is inconsistent with the previous model setup (Windpro), where you apply a long-term correction.
l261: roughness height -> roughness length
l265: Is the roughness length varying with wind direction sector?
l267: grid independence study -> I assume the conclusion of the grid independence study was that the simulations were not dependent on resolution? Why was the resolution of 15 m optimal? In which sense was it optimal?
l285-l289: Taking the mean of an RMSE is mixing different error metrics. You should calculate the squared errors from each sector and take the root-mean in the last step?
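To make this point concrete, a minimal sketch (with synthetic per-sector errors, not the paper's data) contrasting the mean of per-sector RMSEs with the pooled RMSE obtained by taking the root-mean as the last step:

```python
import numpy as np

# Hypothetical per-sector prediction errors (m/s); 12 sectors with unequal sample counts.
rng = np.random.default_rng(0)
sectors = [rng.normal(0.0, 0.3, size=n)
           for n in (40, 25, 60, 10, 35, 50, 20, 45, 30, 15, 55, 28)]

# Mixing metrics: averaging the per-sector RMSEs.
mean_of_rmse = np.mean([np.sqrt(np.mean(e**2)) for e in sectors])

# Suggested approach: pool the squared errors from all sectors, root-mean last.
all_sq = np.concatenate([e**2 for e in sectors])
pooled_rmse = np.sqrt(all_sq.mean())

print(f"mean of per-sector RMSE: {mean_of_rmse:.3f} m/s")
print(f"pooled RMSE:             {pooled_rmse:.3f} m/s")
```

The two values differ whenever the sectors have different sample counts or error magnitudes, which is why pooling before the root-mean is the cleaner aggregation.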
l292: m/s should be in normal font
l302: it would be good to mention here that this is the calibration point.
l318: This is not that surprising because both masts are located on top of a hill. It would be useful to relate this to the "most similar predictor" discussion in https://wes.copernicus.org/articles/5/1679/2020/.
Fig 4: I am a bit confused about how big the errors are at 80 m. Wasn't that used for calibration? How can there already be an RMSE of up to 0.4 m/s?
Sect 3.1.2: This section is hard to understand; what is the main message? The relative cost and skill scores appear suddenly in Fig 7, but it feels like some background on the numbers should be available (appendix?). As discussed, the costs and skill are extremely hard to quantify, so you could end up with any ranking of the models here. I would avoid drawing conclusions like "Taking the cost scores into account, E-Wind is the most suitable tool for the Perdigão site.".
l370: mast 19? You mean 29?
l380: end of line: height.
Fig 8: Are we comparing AEP at the same heights here? That should be added somewhere.
l386: What is an AEP by sector? I only know about an AEP as the production for a year, i.e. for all sectors combined.
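One common reading of "AEP by sector" is the energy contribution of each wind direction sector, which sums to the annual total. A minimal sketch with hypothetical sector frequencies, Weibull parameters and a generic power curve (none of these are the paper's actual inputs):

```python
import numpy as np

# Hypothetical per-sector wind climate (12 sectors): frequency of occurrence and Weibull A, k.
freq = np.array([0.05, 0.04, 0.06, 0.08, 0.10, 0.07, 0.06, 0.09, 0.12, 0.14, 0.11, 0.08])
A    = np.array([5.5, 5.0, 6.0, 6.5, 7.0, 6.8, 6.2, 7.5, 8.0, 8.5, 7.8, 6.9])  # scale (m/s)
k    = np.full(12, 2.0)                                                         # shape (-)

# Generic 2 MW power curve sampled every 1 m/s (kW): cut-in 4 m/s, rated 12 m/s, cut-out 25 m/s.
ws_pc = np.arange(0.0, 26.0, 1.0)
pc = np.clip((ws_pc - 4.0) / (12.0 - 4.0), 0.0, 1.0) ** 3 * 2000.0

def sector_aep_mwh(f_i, A_i, k_i, hours=8760.0):
    """Energy (MWh/yr) contributed by one direction sector."""
    v = np.linspace(0.1, 25.0, 500)
    pdf = (k_i / A_i) * (v / A_i) ** (k_i - 1.0) * np.exp(-(v / A_i) ** k_i)  # Weibull pdf
    p_kw = np.interp(v, ws_pc, pc)                                            # power at each speed
    integrand = p_kw * pdf
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(v))    # trapezoidal rule
    return f_i * hours * integral / 1000.0

aep_by_sector = [sector_aep_mwh(f, a, kk) for f, a, kk in zip(freq, A, k)]
print("AEP by sector (MWh):", np.round(aep_by_sector, 1))
print("Total AEP (MWh):    ", round(sum(aep_by_sector), 1))
```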
l423: It is again not quite clear to me how this is quantified. I would leave out generalizations like this and just discuss model differences. How does a single AEP prediction from one mast to the other make this model the best for the entire Perdigão site?
Fig 11: Where is mast 20?
Sect 3.3: I agree there are so many differences in the different model chains used to calculate AEP that it is impossible to say what the exact reason is. If mast 29 is used for calibration, one would not expect any model error in AEP there? So I would just leave this section out.
l454: It would be a very surprising conclusion if the AEP error did not depend on the wind speed error. What about air density? How has that been calculated in the different model chains?

References:
Troen, I., & Lundtang Petersen, E. (1989). European Wind Atlas. Risø National Laboratory.

Citation: https://doi.org/10.5194/wes-2022-114-RC1
AC1: 'Reply on RC1', Florian Hammer, 01 Jul 2023
The comment was uploaded in the form of a supplement: https://wes.copernicus.org/preprints/wes-2022-114/wes-2022-114-AC1-supplement.pdf
RC2: 'Comment on wes-2022-114', Anonymous Referee #2, 28 Feb 2023
This work describes a comparison of different wind flow models' accuracy in extrapolating the wind speed, and its associated annual energy production (AEP), from one met mast to another location (two met masts in this benchmark) at a complex terrain site in Portugal. The comparison also includes a metric defined by the authors that quantifies the cost of running each model and its associated workflow, from the model's setup to the AEP calculation. They evaluate whether the accuracy skills and the costs predicted beforehand correspond with those obtained after performing the simulations, with the goal of selecting the optimal skill/cost model for this site.
General comments:
The topic addressed by the authors is very relevant to the wind energy industry, and their "skill/cost" idea is certainly interesting. However, there is a lot of missing information about the organization and structure of the benchmark, making it difficult to assess whether the model comparison and its findings are robust. There are also some caveats, described below, about the scoring process definitions that prevent me from recommending the publication of this work in its current form without major revisions that can provide more confidence in the conclusions drawn in the paper.
First, I believe that the definitions of the skill scores given "before" the simulations are not very clearly described in the manuscript. The reader is pointed to a previous work by the authors (Barber et al., 2022b), which concluded with several suggested improvements and pending work related to the scoring conceptualization. However, those points don't seem to have been implemented in the analysis of the present study, or at least this manuscript doesn't explain how the subjectivity in many of the scoring definitions can be mitigated.
Unlike the skill scores, some components of the cost scores are indeed more "quantifiable" (equation 1). Still, the values of some of them, like the "cost of the staff training per project", can again be very subjective and open to the interpretation of the modeller. Other parameters, such as the "hourly rate of the modeller", depend on the institution, country, etc. The cost scores assigned in this comparison can be biased towards the participants from countries with lower wages.
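For illustration only (the paper's equation 1 is not reproduced here, and the terms and figures below are hypothetical), a cost aggregation of the kind being discussed could look like this, which makes clear why the hourly rate and the training amortisation dominate the result:

```python
# Hypothetical cost aggregation in the spirit of the discussion above; all numbers are made up.
def total_cost_eur(hours_setup, hours_postproc, hourly_rate_eur,
                   cpu_hours, cpu_hour_price_eur, training_cost_eur, n_projects):
    """Rough project cost: labour + compute + training amortised over several projects."""
    labour = (hours_setup + hours_postproc) * hourly_rate_eur
    compute = cpu_hours * cpu_hour_price_eur
    training_per_project = training_cost_eur / n_projects
    return labour + compute + training_per_project

# Example: a RANS-like workflow vs. an LES-like workflow (illustrative figures only).
print(total_cost_eur(16, 8, 80.0, 2_000, 0.05, 4_000, 10))      # RANS-like
print(total_cost_eur(40, 16, 80.0, 200_000, 0.05, 12_000, 10))  # LES-like
```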
On the other hand, besides the challenge of finding an optimal skill/cost model, equally relevant for the wind resource assessment community is the correct usage of the selected model, especially for models that are very sensitive to the user's expertise, such as some of the research codes included in this comparison. In that regard, I find many technical details missing about the models' setups (see the specific comments), and for those details that are included, the important differences in their configurations, beyond their different physics/numerics, are noticeable. For instance, one model includes the effects of the turbine wake (PCE-LES), while another employed a long-term correction from ERA5 as the input wind field (WAsP). Others included atmospheric stability (E-Wind), while others used time series instead of period averages (E-Wind, Fluent). While this is fine in benchmarking exercises such as the CREYAP series (Mortensen et al., 2015), it complicates the authors' goal of finding an optimal WRA tool, because it is very difficult to conclude whether the differences in skill (and costs) among the workflows are related to the model, its configuration, or simply the methodology used to compute the AEP, beyond simulating more or fewer directional sectors. Besides, this mix of factors would probably preclude extrapolating the optimal workflow found in this case study to other sites.
The description of the case study in section 2.1 is also a bit superficial. In addition to the general description of the site and the measurement campaign, I would suggest that the authors add more information about the met mast data preparation, filtering and selection. Useful information could be the data availability by direction from the three met masts and comments about the potential effects of the wake of the operating WTG on the mast measurements. The wind rose shown in Fig. 2 indicates less frequent but still important SW winds, for which wake effects from the WTG are expected to occur.
It would also be great if the authors could add information about the atmospheric stability during the 3-month period considered for the benchmark. This information might help to explain some of the large errors obtained by some of the models. It is already interesting to see that mast 29 has a large deviation from the (neutral-stability) logarithmic profile, whereas mast 25, located on the lee side of the hill and thus in more complex flow, has an excellent logarithmic fit.
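For reference, the "logarithmic fit" mentioned here is the neutral-stability log law u(z) = (u*/κ) ln(z/z0), which is linear in ln(z). A minimal sketch of such a fit on a hypothetical mast profile (not Perdigão data):

```python
import numpy as np

# Hypothetical mast profile: measurement heights (m) and mean wind speeds (m/s).
z = np.array([10.0, 20.0, 40.0, 60.0, 80.0, 100.0])
u = np.array([4.1, 4.9, 5.8, 6.3, 6.7, 7.0])

# Neutral log law: u(z) = (u*/kappa) * ln(z/z0), i.e. u is linear in ln(z).
kappa = 0.4
slope, intercept = np.polyfit(np.log(z), u, 1)
u_star = slope * kappa                 # friction velocity (m/s)
z0 = np.exp(-intercept / slope)        # roughness length (m)

# Goodness of fit: a poor R^2 would indicate departure from neutral conditions.
u_fit = slope * np.log(z) + intercept
r2 = 1.0 - np.sum((u - u_fit) ** 2) / np.sum((u - u.mean()) ** 2)
print(f"u* = {u_star:.2f} m/s, z0 = {z0:.3f} m, R^2 = {r2:.3f}")
```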
Specific comments:
lines 56-60: The authors mention the analysis carried out by Barber et al. (2020c), but this work is only available as WECD since that paper has been withdrawn.
line 72: By previous works, are the authors referring to the Barber et al. (2022a, b) articles instead of Barber et al. (2020a)? Only the former compare several modelling tools on different sites.
line 81: This citation also points the reader to the withdrawn paper of Barber et al. (2020c). Wouldn't "Barber et al., 2022b" be the right citation in this case?
Section 2.3:
As mentioned above, this section lacks many details about the models' technical setups. In the case of the RANS- and LES-based models, information about their boundary conditions, especially the treatment of the ground, is critical for understanding and potentially allowing the repeatability of this work. Also very important is providing the values of the different model constants used by the RANS models.
- The OpenFOAM model is set to match the wind speed and direction at the calibration mast at 100 m (line 212). Aren't all the other models set to match the wind at 80 m (line 117)?
Line 223. Change "The direction variable" for "the wind direction". And, where is this wind direction obtained from?
Line 223. Change "Wind speed variables" for "wind speed components"
Line 227. I think that the phrase "constant line values for some variables" is not clear about which variables they are referring to, or what "constant line" means in this context.
Line 246. Does the E-Wind workflow simulate 24 wind directions, or 12 as described in Table 1?
Figure 4. Although it is expected that the RANS and LES models have some small errors, shouldn't the WAsP model have no error at the calibration mast at 80 m, due to the way this model works?
Line 364. Is the "respective met mast" the calibration mast? This phrase is a bit confusing, so it is not very clear how this normalization is done.
Figure 8. Is this the AEP at 80 m height? Again, I'm not sure I understood how the AEP normalization was defined.
Figure 11. It seems that mast 20 is missing.
The references are the same ones already included in the authors' manuscript.

Citation: https://doi.org/10.5194/wes-2022-114-RC2
AC2: 'Reply on RC2', Florian Hammer, 01 Jul 2023
The comment was uploaded in the form of a supplement: https://wes.copernicus.org/preprints/wes-2022-114/wes-2022-114-AC2-supplement.pdf