Distributed under the Creative Commons Attribution 4.0 License.
Vertical extrapolation of ASCAT ocean surface winds using machine learning techniques
Daniel Hatfield, Charlotte Bay Hasager, and Ioanna Karagali
Abstract. The increasing demand for offshore wind energy requires more hub-height-relevant wind information, while larger wind turbines require measurements at greater heights. In situ measurements are harder to acquire at higher atmospheric levels; meanwhile, the emergence of machine-learning applications has led to several studies demonstrating improved accuracy for vertical wind extrapolation over conventional power-law and logarithmic-profile methods. Satellite wind retrievals supply multiple daily wind observations offshore, however only at 10 m height. The goal of this study is to develop and validate novel machine-learning methods that use satellite wind observations and near-surface atmospheric measurements to extrapolate wind speeds to greater heights. A machine-learning model is trained on 12 years of collocated offshore wind measurements from a meteorological mast (FINO3) and space-borne wind observations from the Advanced Scatterometer (ASCAT). The model is extended vertically to predict the FINO3 vertical wind profile. Horizontally, it is validated against the NORA3 mesoscale model reanalysis data. In both cases the model slightly over-predicts the wind speed, with differences of 0.25 and 0.40 m s−1, respectively. An important feature in the model training process is the air-sea temperature difference; satellite sea surface temperature observations were therefore included in the horizontal extension of the model, resulting in differences of 0.20 m s−1 with NORA3. A limiting factor when training machine-learning models with satellite observations is the small, finite number of daily samples at discrete times; this can skew the training process towards higher or lower wind speed predictions depending on the average wind speed at the satellite observational times. Nonetheless, the results of this study demonstrate the applicability of machine-learning techniques for extrapolating long-term satellite wind observations when enough samples are available.
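For illustration only, a minimal sketch of the type of extrapolation described in the abstract, assuming scikit-learn and purely synthetic stand-ins for the collocated ASCAT/FINO3 data. The random-forest choice, the feature set (10 m wind and air-sea temperature difference) and all variable names are assumptions for this sketch, not the authors' exact pipeline.

```python
# Illustrative sketch only: synthetic data stand in for collocated ASCAT/FINO3 samples.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 5000
ws10 = rng.weibull(2.0, n) * 8.0          # synthetic ASCAT-like 10 m wind speed [m/s]
dT = rng.normal(0.0, 2.0, n)              # synthetic air-sea temperature difference [K]
ws100 = ws10 * (1.15 + 0.02 * dT) + rng.normal(0.0, 0.5, n)  # synthetic 100 m "truth"

X = np.column_stack([ws10, dT])           # features: 10 m wind speed and AT - SST
y = ws100                                 # target: wind speed at ~100 m

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
bias = float(np.mean(pred - y_te))
print(f"RMSE: {rmse:.2f} m/s  bias: {bias:.2f} m/s")
```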
Status: final response (author comments only)
- RC1: 'Comment on wes-2022-101', Anonymous Referee #1, 14 Dec 2022
- AC1: 'Reply on RC1', Daniel Hatfield, 14 Feb 2023
  The comment was uploaded in the form of a supplement: https://wes.copernicus.org/preprints/wes-2022-101/wes-2022-101-AC1-supplement.pdf
- RC2: 'Comment on wes-2022-101', Anonymous Referee #2, 03 Jan 2023
The authors test a new model, based on machine learning (ML), to extrapolate ocean surface winds to hub heights of offshore wind masts, which is of relevance for the operation of offshore wind farms. Scatterometer ocean surface winds together with air-sea temperature differences appear the most important parameters for training the ML model. The model was trained for different time periods and the verification with independent data (not used for training) shows that the ML model outperforms an NWP model based on WRF.
General comments
================
Although the authors have demonstrated that ML techniques can be used to extrapolate ocean surface winds to ~100 m altitude, they have not demonstrated that the methodology outperforms the use of already available NWP models.
It would be very helpful to briefly outline the current operational practice of wind farm operators. ASCAT winds and SST are freely available, so an ML model using these two input parameters could be an interesting option for operators in case they have no access to actual mesoscale model data. But is that the case? Although WRF is freely available as well, the NEWA dataset is of limited use for operators as it needs ERA5 as its hosting model. However, ERA5 availability is at best a couple of days behind real time, and as such the NEWA approach is of limited value for daily operations.
In the context of the above, can the authors please explain the relevance of the use of NEWA in their study, also given that NORA3 outperforms NEWA WRF (line 326)?
The comparison against NEWA (WRF based) is not fair in the sense that NEWA does not make use of scatterometer winds explicitly (NEWA has no data assimilation), but only implicitly through the boundaries of the hosting model. A fairer comparison would be to use NORA3 in Table 5, because it outperforms NEWA as stated by the authors and probably makes explicit use of scatterometer winds in the reanalysis, although this was not mentioned by the authors.
Major comments
===============
Although it was not the ultimate goal of the study to test whether a model based on ML outperforms an NWP model, it is of importance to operators of offshore wind farms to know if ML outperforms a mesoscale NWP model. NORA3 represented the latter, so please add NORA3 to Table 5.
Table 5. The column denoted 'N' shows for ML the number for "Concurrent data with ASCAT" (although the numbers are not exactly the same as those in Table 3). This is misleading, as the numbers in the other columns (RMSE, bias, ...) are based on "Data used for validation". Please use the correct numbers in Table 5.
Section 2.5. As a non-expert in ML techniques, I found this section too abstract and hard to read and understand. It would help to relate the parameters in Table 2 to X, Y and T in the text. A formula would help (see below). How does y_overbar relate to the parameters in Table 2? Is it the wind speed at e.g. ~100 m?
The paragraph on hyper-parameters and K-fold cannot be understood without any background knowledge of these techniques. For me it was totally unclear. I would remove it from the text.
The last paragraph in section 3.1 concludes that wind at height is mainly modelled through wind at the surface (WS) and the air-sea temperature difference (AT-SST). In formula: ML(j,FINOi) = a(i,j)WS + b(i,j)(AT-SST), with j denoting altitude and i the FINOi (i = 1, 2, 3) station. The training set then aims to estimate a(i,j) and b(i,j). Is that right? (See also the remark on section 2.5 above.)
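(For readers less familiar with these techniques, a minimal illustrative sketch of what hyper-parameter tuning with K-fold cross-validation involves, assuming scikit-learn and synthetic data; the parameter grid, the 5 folds and the WS / AT-SST feature pair are assumptions, not the authors' settings. Note that a tree-based model learns a nonparametric mapping rather than explicit coefficients a(i,j) and b(i,j); feature importances play the analogous diagnostic role.)

```python
# Illustrative sketch of K-fold hyper-parameter tuning; all settings are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, KFold

rng = np.random.default_rng(1)
n = 2000
ws = rng.weibull(2.0, n) * 8.0                          # surface wind speed WS (synthetic)
dT = rng.normal(0.0, 2.0, n)                            # AT - SST (synthetic)
y = ws * (1.15 + 0.02 * dT) + rng.normal(0.0, 0.5, n)   # wind at height (synthetic)
X = np.column_stack([ws, dT])

# K-fold cross-validation: the training set is split into K parts; each part is
# held out once for validation while the model is fitted on the other K-1 parts.
cv = KFold(n_splits=5, shuffle=True, random_state=1)
grid = {"n_estimators": [100, 300], "max_depth": [None, 10]}   # assumed grid
search = GridSearchCV(RandomForestRegressor(random_state=1), grid,
                      cv=cv, scoring="neg_root_mean_squared_error")
search.fit(X, y)

print("best hyper-parameters:", search.best_params_)
print("feature importances (WS, AT-SST):", search.best_estimator_.feature_importances_)
```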
Line 426: "Results from this study show the prospect of applying machine-learning methods for the purpose of extrapolating surface winds to higher atmospheric levels."
I think this statement is too strong as the study does not show that RFM outperforms mesoscale models which assimilate ASCAT. Please correct.
I can imagine a seasonal dependence of the ML model parameters for the different station locations. Was this tested? Please comment.
Minor comments
=============
Line 91: "This 12.5 km product has a standard deviation of 1.7 m s−1 and a bias of 0.02 m s−1 (Verhoef and Stoffelen, 2019)." I guess this is for wind speed, not the wind components? Please make this clear in the text.
Line 144: why a 3x3 grid? Given the NORA3 2 km grid size and the ASCAT 12.5 km product, I would expect 6x6, since 6*2 = 12, which is close to the 12.5 km ASCAT footprint. Please explain the choice of the number 3.
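(The footprint arithmetic behind this question, as a minimal sketch; the grid spacing and window sizes are taken from the reviewer's numbers, and the block averaging implied here is an assumed, not confirmed, collocation method.)

```python
# Footprint arithmetic behind the 3x3 vs 6x6 question (illustration only).
nora3_dx_km = 2.0        # NORA3 grid spacing
ascat_cell_km = 12.5     # ASCAT wind product cell size

for window in (3, 6):
    covered = window * nora3_dx_km
    print(f"{window}x{window} NORA3 cells span {covered:.0f} km "
          f"(ASCAT cell: {ascat_cell_km} km)")
# 3x3 spans 6 km, 6x6 spans 12 km -- hence the expectation of a 6x6 window.
```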
In Table 3, how were the “Data used for validation” selected? Randomly from the “Concurrent data with ASCAT”?
Figure 2, right panel. Why does the number 1148 differ from 1147 in Table 3? Please correct.
Figure 5. These numbers are based on validation data only, so N = 1148 (or 1147), right?
Table 6. The number -0.003 in the 7th column should be -0.004, in agreement with Table 5? Please check.
The numbers below Table 6 are not clear at all. 63% comes from 1.196 -> 1.949. Why is this relevant? 8% comes from 1.803 -> 1.949. This change is indeed important to report. The same issue applies to the numbers in “…. 65% or by 4% ……”.
Line 218: how do you arrive at 1% and 2% (Table 6)? There is a lot of information in Table 6, but it is not explained clearly enough in the text.
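(For clarity, the percentages quoted in the two comments above follow from simple relative changes of the RMSE values the reviewer cites from Table 6; a minimal check:)

```python
# Relative changes behind the 63 % and 8 % discussed for Table 6 (reviewer's numbers).
def pct_increase(old, new):
    return 100.0 * (new - old) / old

print(f"1.196 -> 1.949: +{pct_increase(1.196, 1.949):.0f} %")  # ~63 %
print(f"1.803 -> 1.949: +{pct_increase(1.803, 1.949):.0f} %")  # ~8 %
```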
Line 134: “while the structure and features of spatial variability in the wind field are not maintained”. Was that expected? Why?
Figure 4d. The title mentions: ‘ASCAT-NORA difference 100m’, which is confusing given that ASCAT is representative of 10m. Clearly, RFM based on ASCAT is meant. Please correct.
Line 340: the text: “The noticeable improvement of the model compared to NORA3 is evident” is misleading as it suggests that the RFM model outperforms NORA3, but this has not been assessed in the study (see comments on Table 5). What is meant is “…taking NORA3 as reference (truth) …”
Line 372: “vertical wind measurements”. “vertical wind” is confusing. What is meant is: wind profile or profile of (horizontal) wind. Please correct.
Typos
=====
Line 38: put space in “simulations(Karagali”
Line 45: predict -> predicting
Line 49: Deep Neutral Network -> Deep Neural Network?
Line 54: remove “Weather”
Line 66: great -> greater
Line 92: remove “for”
Line 94: Remove “from” or rephrase this sentence.
Line 126: remove “for”
Line 283: bias and RMSE should be reversed?
Line 298: winds speeds -> wind speeds
Line 320: extrapolatinglow, please use a space in between.
Citation: https://doi.org/10.5194/wes-2022-101-RC2
- AC2: 'Reply on RC2', Daniel Hatfield, 14 Feb 2023
  The comment was uploaded in the form of a supplement: https://wes.copernicus.org/preprints/wes-2022-101/wes-2022-101-AC2-supplement.pdf