Offshore wind energy forecasting sensitivity to sea surface temperature input in the Mid-Atlantic
- National Renewable Energy Laboratory, Golden, Colorado, USA
Abstract. As offshore wind farm development expands, accurate wind resource forecasting over the ocean is needed. One important yet relatively unexplored aspect of offshore wind resource assessment is the role of sea surface temperature (SST). Models are generally forced with reanalysis data sets, which employ daily SST products. Compared with observations, significant variations in SSTs that occur on finer time scales are often not captured. Consequently, shorter-lived events such as sea breezes and low-level jets (among others), which are influenced by SSTs, may not be correctly represented in model results. The use of hourly SST products may improve the forecasting of these events. In this study, we examine the sensitivity of model output from the Weather Research and Forecasting Model (WRF) 4.2.1 to two different SST products—a daily, spatially coarser resolution data set (the Operational Sea Surface Temperature and Ice Analysis, or OSTIA), and an hourly, spatially finer resolution product (SSTs from the Geostationary Operational Environmental Satellite 16, or GOES-16). We find that in the Mid-Atlantic, although OSTIA SSTs validate better against in situ observations taken via a buoy array in the area, the two products result in comparable hub-height wind characterization performance on monthly time scales. Additionally, during flagged events that show statistically significant wind speed deviations between the two simulations, the GOES-16-forced simulation outperforms that forced by OSTIA.
Stephanie Redfern et al.
Status: final response (author comments only)
RC1: 'Comment on wes-2021-150', Anonymous Referee #1, 02 Mar 2022
In this study, Redfern et al. compare the performance of WRF simulations forced by two SST products with different temporal, spatial, and assimilation characteristics. They use lidar as well as buoy measurements to compare the performance of the simulations, evaluated through several error measures. They conclude that both SST products give comparable results on monthly time scales, while during more challenging events GOES-16 outperforms OSTIA for simulating winds at 100 m height.
General comments
The study is interesting and certainly relevant for wind energy, since accurate wind forecasts during challenging situations are critical. However, I think the analysis could be improved by evaluating different heights. Most of the analysis focuses on hub-height wind, defined as 100 m. However, the appendix shows how much the error measures vary with height. Discussing other heights in more detail would create a more complete picture, both because turbine hub heights are expected to increase and to cover the entire rotor area.
I was confused about the third SST product, MUR. It is not well introduced and only turns up as a surprise for approximately 13 lines of the paper. It is mentioned in neither the abstract, the discussion, nor the conclusion. The role of MUR should be made clearer and a proper introduction is necessary; otherwise it should be left out entirely. See my specific comment in that regard below.
Specific comments
* Line 39: You could add the following paper to your discussion, which addresses the LLJ frequency in the area that you are investigating:
Aird, Jeanie A., Rebecca J. Barthelmie, Tristan J. Shepherd, and Sara C. Pryor. 2022. "Occurrence of Low-Level Jets over the Eastern U.S. Coastal Zone at Heights Relevant to Wind Energy" Energies 15, no. 2: 445. https://doi.org/10.3390/en15020445
* Line 81: Please add some more statistics about the availability of the buoy and lidar data.
* Line 90: In line 69 you write about two model simulations ("We run two model simulations with identical setups, aside from the input SST data, off the Mid-Atlantic coast for June and July of 2020."), while here you write about three simulations ("We compare how well three different SST datasets validate against buoy observations and subsequently select the two best-performing data sets to force our simulations (Table 3). Aside from these different SST product inputs, the rest of the model parameters in the simulations remain identical."). I understand that you discard one SST product early on during your evaluation, but it is confusing for a reader to get this mixed information on how many simulations are performed. Also, Table 3 shows only two products, which makes it even more confusing. The third product, MUR, is first introduced only during the analysis in Section 3.1. I suggest that you either consistently talk about three products or leave out the third product. At the least, MUR should already be introduced in Section 2.3.
* Line 154-156: Personally, I don't like this style of summarizing the findings at the beginning of each section. It takes away the motivation to read the entire section. The same applies to lines 205-206.
* Line 161: Linked to my point on line 90: why introduce the third data set, with a "significantly coarser" resolution, here? It seems unnecessary, especially since in line 172 it is already disregarded from further analysis.
* Line 171: According to Table 5, for Atlantic Shores the bias and EMD are rather comparable for MUR and the other products. You do not show the evaluation for the other stations for MUR. Based on Table 5, I find it difficult to follow your assessment.
* Line 179-181: I cannot follow your argument here: for instance, for both buoys 44017 and 44065, EMD, RMSE, and bias show worse performance for GOES-16 than for OSTIA, as well as compared to the average over all sites. Please clarify.
* Line 187/188: It would be nice to see the boxplots to comprehend where your conclusions come from. If you feel there is not enough space to show all plots, it would be nice if you could add "(not shown)" to the text, so the reader does not keep searching for the plot related to that statement. This also applies to line 171.
* Beginning of each section 3.3.1 - 3.3.3: To better understand the event, it would be good to have a description of the event at the beginning. This description could be what you have in line 225ff. This gives a good introduction to the event.
* Line 215-220: As you motivate yourself, 100 m is only one height. In Figure 8 you could easily also add the matrices for e.g. 150 m height next to those for 100 m.
* Line 239: An improvement of 0.02 for GOES-16 compared to OSTIA is indeed very small. I would remove that statement.
* Line 275: You show figures for wind direction in the above analysis, but you do not evaluate the performance of the different products in terms of wind direction or wind veer. Are they similar for those quantities?
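Incidentally, the four error measures referenced throughout these comments (bias, RMSE, correlation, EMD) are easy for readers to reproduce with numpy. The sketch below is illustrative only; the function name and the assumption of paired, equal-length model and observation series are mine, not the authors' code:

```python
import numpy as np

def validation_metrics(model, obs):
    """Bias, RMSE, Pearson correlation, and 1-D earth mover's distance
    between paired model and observed wind-speed series."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias = np.mean(model - obs)
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    corr = np.corrcoef(model, obs)[0, 1]
    # For equal-sized samples, the 1-D EMD reduces to the mean absolute
    # difference of the sorted series.
    emd = np.mean(np.abs(np.sort(model) - np.sort(obs)))
    return {"bias": bias, "rmse": rmse, "corr": corr, "emd": emd}
```

A constant +1 m s-1 offset, for example, gives bias, RMSE, and EMD of 1 while the correlation remains 1, which is why the paper's near-zero biases alongside nonzero EMDs are worth reporting to two significant digits.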
Technical corrections
* Line 50: Please add citations for ERA5 and MERRA2
* Line 62: Please add a citation for WRF
* Figure 1: According to the WES guidelines: "A legend should clarify all symbols used and should appear in the figure itself, rather than verbal explanations in the captions (e.g. "dashed line" or "open green circles")". Please add a corresponding legend to the figure. Why are the leasing areas in different colours?
* Table 2: Please consider uploading the wps and wrf namelists to a repository (e.g. Zenodo) so that the study becomes more reproducible
* Line 93: Please add a citation for OSTIA
* Line 98: Please add a citation for GOES-16
* Line 113: Please add a citation for DINEOF
* Figure 3: Are the buoys in a particular order? If not, I suggest ordering them in ascending order
* Figure 4: It seems you are not using the same bins for all three PDFs, which makes it difficult to compare them.
* Figure 5: Why is 80 m highlighted although you mention in the text (line 191) that you consider 100 m as hub-height?
* Figure 8: In contrast to Figure 3, you did not reverse the colormap for correlation. So in Figure 8, yellower colours are better for correlation but worse for RMSE and EMD. I suggest showing better performance in the same colour for all matrices (this also goes for Figures 11 and 14). Please show two significant digits for the GOES-16 bias, even if it is 0.
* Line 240: "although both present values very close to 0 m s-1" <- "are" is missing
* Figure A1-A3: the hub-height line is at 80 m instead of 100 m as stated in the description
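On the Figure 4 binning point above: the PDFs become directly comparable if a single set of bin edges spanning all samples is derived before histogramming. A minimal numpy sketch (function name and default bin count are illustrative assumptions):

```python
import numpy as np

def shared_bin_pdfs(samples, n_bins=30):
    """Histogram several wind-speed samples on one common set of bin
    edges so the resulting PDFs can be compared bin by bin."""
    lo = min(np.min(s) for s in samples)
    hi = max(np.max(s) for s in samples)
    edges = np.linspace(lo, hi, n_bins + 1)
    pdfs = [np.histogram(s, bins=edges, density=True)[0] for s in samples]
    return edges, pdfs
```

With density=True, each PDF integrates to one over the shared edges, so any differences between the curves reflect the data rather than the binning.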
RC2: 'Comment on wes-2021-150', Anonymous Referee #2, 21 Apr 2022
General Comments:
The paper of Redfern et al. addresses the role of SST representation in modelling offshore wind energy forecasts in the Mid-Atlantic. They compare modelled and observed wind speeds for selected shorter-lived events (e.g., sea breezes and low-level jets). They examine the impact of SST on wind forecasting by using the OSTIA (daily) and GOES-16 (hourly) SST products in the model configuration.
The topic is of high interest in the context of studying the relationship between SST and offshore wind energy, and the paper is well written. However, some clarifications and improvements are needed before the manuscript is publishable in the Wind Energy Science journal. Although the revision is somewhere between major and minor, I would like the authors to address all of my comments and suggestions listed below:
Major Comments:
Clarity of the abstract: I found the abstract easy to read and the methodology easy for a reader to understand. However, I would give a few more details, such as the horizontal resolutions of the GOES-16 and OSTIA fields, the study period (June and July), and an indication that the wind speed deviations during flagged events are at 100 m hub height. Concerning results, I would try to include some numerical results (e.g., better performance by xx% or so).
The SST performance: My feeling is that the SST analysis in this paper is not fully conducted on the event scale. Although the SST difference maps (e.g., Figure 9) are beneficial, they are not sufficient to track the daily fluctuations in SST during the event periods. I strongly suggest the authors show time series of the SST products for all three lidars (not just Atlantic Shores) and highlight the event dates on the time axes of these graphs. Additionally, the validation metrics of SST could be calculated for each month separately. I believe these changes will help the reader link the wind and SST event-scale changes.
Then, I have other concerns about the discussion (and conclusion): what is the main take-home message? This point should be much clearer. I suggest the authors give more detail on the physical explanation of their outcomes (e.g., what might be the reason that these events correlate with the wind ramps? Why does GOES-16 generally outperform OSTIA at different hub heights?). I would also compare the final results with similar studies, discuss the uncertainties from several error sources (i.e., overall uncertainty in initial and boundary conditions, structural model uncertainty, etc.), and try to include some numerical results. In the conclusion, there is no need to explain the methodology in detail; a bullet-point list may be helpful to summarize the main outcomes of the paper.
Minor Comments:
1 Introduction:
a) P2L30: “The cold pool forms during the summer…” needs a reference.
b) P3L62: The authors can use the acronym 'NWP' instead of spelling out numerical weather prediction each time.
2 Methods:
a) Why is August not included in this study? Wasn’t there any short-lived event during August 2020? The authors can explain the reason in the manuscript.
2.2 Model Setup:
a) P3L87: The resolution of the nested domain should be indicated in the text.
b) P4: 44008 station is not listed in Table 1.
c) P4: Please indicate the horizontal resolution of the shown domain, as well as the coordinates, in Figure 1.
d) P4: The units of coordinates should be indicated in Table 1.
e) P5: The physics scheme references are missing in Table 2 (e.g., RRTMG, Kain-Fritsch, etc.).
f) Showing both domains (parent and nest) and their resolutions in one figure would be beneficial to explain the model setup.
2.3. Sea Surface Temperature Data:
a) P5L93: The OSTIA product needs a reference.
b) P6L113: The DINEOF needs a reference.
2.4 Event Selection:
a) Listing the event dates and simulation time in a table can help the reader to follow the methodology easily.
2.5 Validation Metrics:
a) Why was the 100 m hub height in particular used in the evaluation? The authors can explain the reason in the manuscript.
3 Results:
3.1 Sea Surface Temperature Performance:
a) The focus of this study is the GOES-16 and OSTIA SST products and their performance off the Mid-Atlantic coast. The MUR SST data are limited to Atlantic Shores and are less successful at matching the in situ SST measurements than the other two data sets during June and July of 2020. Is this data set truly needed for the SST analysis? Why?
b) The correlation difference between the GOES-16 and OSTIA SST products in Figure 3 is not large. The statement "Although GOES-16 follows the diurnal cycle rather than representing only the daily average SSTs, it still does not correlate with observations as well as the OSTIA" sounds like a strong judgment.
c) The EMD performance of the SST products (Figure 3) should also be discussed in the text.
3.2 Monthly Wind Speeds:
a) “Additionally, in both simulations, whole domain winds in July tend to be significantly faster than June winds.” (P10L189) conflicts with the line “June average wind speeds are faster than those in July for both simulations.” (P22L283) in the discussion section.
b) Why are the modeled hub-height wind speed bias and correlation for simulations in Figure 5 only shown for June 2020, not also for July 2020? The authors should state the reason in the manuscript.
3.3. Event-Scale Wind Speeds:
a) P13L204: What are the criteria for "little" observational data?
3.3.1. June 21 – 22, 2020 Event:
a) P15L226: Grammar mistake? ("affect")
3.3.2. July 10 – 11, 2020 Event:
a) P16L235: “(Fig.10(a)))” one parenthesis is extra.
** I suggest this reference concerning the sensitivity study of the WRF model (including the OSTIA SST) in offshore wind modeling in the Baltic Sea: https://doi.org/10.1016/j.gsf.2021.101229.