This work is distributed under the Creative Commons Attribution 4.0 License.
Linking weather patterns to observed and modelled turbine hub-height winds offshore U.S. West Coast
Abstract. The U.S. West Coast holds great potential for wind power generation, although that potential varies due to complex coastal processes. Characterizing and modelling turbine hub-height winds under different weather conditions are vital for wind resource assessment and management. This study uses a two-stage machine learning algorithm to identify five large-scale meteorological patterns (LSMPs): post-trough, post-ridge, pre-ridge, pre-trough, and California-high. The LSMPs are linked to offshore wind patterns, specifically at lidar buoy locations within lease areas for future wind farm development off Humboldt and Morro Bay. Distinct responses in wind speed, wind direction, diurnal variation, and jet features are observed for each LSMP at both lidar locations. The wind speed at Humboldt is higher during the post-trough, pre-ridge, and California-high LSMPs and lower during the remaining LSMPs. Morro Bay has smaller responses in mean speeds, showing increased wind speed during the post-trough and California-high LSMPs. Besides the LSMPs, local factors, including the land-sea thermal contrast and topography, also modify mean winds and diurnal variation. The High-Resolution Rapid Refresh model analysis does a good job of capturing the mean and variation at Humboldt but produces large biases at Morro Bay, particularly during the pre-ridge and California-high LSMPs. The findings are anticipated to guide the selection of cases for studying the influence of specific large-scale and local factors on California offshore winds and to contribute to refining numerical weather prediction models, thereby enhancing the efficiency and reliability of offshore wind energy production.
Status: final response (author comments only)
RC1: 'Comment on wes-2024-76', Anonymous Referee #1, 26 Aug 2024
Review of “Linking weather patterns to observed and modelled turbine hub-height winds offshore U.S. West Coast” by Liu et al.
General comments: This manuscript provides a very interesting analysis of turbine-height wind speeds observed by two floating lidar buoys in coastal California waters. The manuscript is well-written and easy to read, and the figures are all well-composed and enlightening. My comments are mostly minor; however, two issues are more substantial and may require non-trivial revisions.
The first has to do with the statement on line 100-101, which implies that a symmetrically paired set of nodes is not independent, and that a lack of independence is an undesirable feature. But is that true? Consider for example the statement that the authors make in the introduction about model errors being different for northerly wind versus southerly winds (lines 63-64) and how it is important to treat these two separately. The statement on line 100 suggests that having a symmetric northerly and southerly pair is undesired, in contradiction to the statement on line 63. Also, does applying k-means clustering to the SOMs then remove either the northerly or southerly SOM because they are not independent? Although the sequential combination of SOMs and K-means clustering sounds reasonable at first glance, it is not clear what this does in practice. The manuscript would be improved if the authors provided a description of what the procedure does using some real-world meteorological examples (Northerly vs southerly flow; onshore vs offshore flow; strong winds versus weak winds; etc). I also note that I cannot find this information in the 2023 paper by Liu et al.
Related to this, the combination of SOMs and K-means results in 5 LSMPs. If one only calculated 5 or 6 SOM nodes, would they give anything substantially different from these 5 LSMPs? To first order, I would expect them to be very similar. If the authors were to make this comparison, and find that the 5 or 6 SOM nodes are in fact substantially different from those from the combination procedure, then it would strongly support their contention that the two-step process is necessary. Without that test, I remain skeptical.
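To make the two-step procedure under discussion concrete, here is a minimal, self-contained sketch: a toy SOM trained on synthetic fields, whose nodes are then grouped by K-means. The grid size, decay schedules, and synthetic data are illustrative assumptions, not the authors' actual configuration from Liu et al. (2023).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for daily large-scale fields (n_days x n_features);
# the study clusters Z500, surface pressure, and 2 m temperature maps.
X = rng.standard_normal((500, 40))

def train_som(X, rows=4, cols=5, n_iter=2000, lr0=0.5, sigma0=1.5):
    """Train a tiny rectangular SOM; returns node weights (rows*cols, n_features)."""
    W = rng.standard_normal((rows * cols, X.shape[1]))
    grid = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
    for t in range(n_iter):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))          # best-matching unit
        frac = 1.0 - t / n_iter                              # linear decay schedule
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * (sigma0 * frac + 1e-3) ** 2))  # Gaussian neighbourhood
        W += lr0 * frac * h[:, None] * (x - W)
    return W

def kmeans(P, k=5, n_iter=50):
    """Plain K-means on points P; returns one cluster label per point."""
    C = P[rng.choice(len(P), k, replace=False)].copy()
    labels = np.zeros(len(P), int)
    for _ in range(n_iter):
        labels = np.argmin(((P[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                C[j] = P[labels == j].mean(axis=0)
    return labels

W = train_som(X)                 # stage 1: 20 SOM nodes
node_to_lsmp = kmeans(W, k=5)    # stage 2: group the 20 nodes into 5 LSMPs
# Each day inherits the LSMP of its best-matching SOM node
day_bmu = np.argmin(((X[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
day_lsmp = node_to_lsmp[day_bmu]
```

The reviewer's proposed test amounts to comparing `day_lsmp` against the labels from `train_som(X, rows=1, cols=5)` used directly, i.e. a 5-node SOM with no second stage.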
The second major issue has to do with Figure 4. This is a very nice figure, and very informative, and it helps to make a bit clearer the impacts of the two-step clustering process. However, I am surprised that the clusters are grouped as they are. For example, in Fig. 4a for Humboldt, the top-right K-means cluster has two very high wind speed SOMs (red and orange squares) that are real outliers from the rest of the members. Likewise, the Morro Bay bottom-right K-means group has two blue squares that are outliers. This implies that the SOM/K-means clustering method based on 500 hPa geopotential, Psfc, and T2 is not always the best way to organize the data if one is interested in 80 m offshore winds. Would it make sense to run the process in reverse, and find 80 m wind speed SOMs/K-means clusters and then find the corresponding large-scale weather patterns? Alternatively, is it possible to force the K-means clusters to have slightly modified SOM members such that these large outliers go into different K-means clusters?

Specific comments:
Line 1: Would the title be more accurate if it said “Linking Large-Scale Weather Patterns …” since those are the only weather patterns investigated?
Line 13 “resource assessment”
Line 14: From symmetry, I would have expected that a “California Low” would also have been a LSMP. Why isn’t it the 6th LSMP?
Lines 40-43. An additional offshore reference that could be added here is Myers et al. 2024: Evaluation of Hub-Height Wind Forecasts Over the New York Bight. Wind Energy, https://doi.org/10.1002/we.2936.
Line 49. An additional reference here for HRRR biases is Bianco, L et al., 2019: Impact of model improvements on 80 m wind speeds during the second Wind Forecast Improvement Project (WFIP2), Geosci. Model Dev., 12, 4803–4821, https://doi.org/10.5194/gmd-12-4803-2019.
Line 54: circulations entail
Line 64-65: influencing the California offshore environment.
Line 100: What kind of numbers are considered small here? SOM analyses typically use 10-30 nodes; are those considered to be small?
Lines 116, 118: A LLJ
Lines 121-123: “this study uses a 2 m s-1 fall-off threshold to define LLJs, without specifying the vertical distance between the jet core and the threshold height as long as it is within the observational limit of 240 m above MSL.” The authors should note that, due to the height limitation of 240 m, this definition will certainly underestimate the number of true LLJs.
Line 141: I don’t always see this. For example, in Fig. 1a, in the third row from the bottom the highs and lows are definitely rotating counter-clockwise from left to right.
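The quoted 2 m s-1 fall-off definition can be illustrated with a short sketch. This is an assumed implementation for illustration only, not the authors' code; it also makes the reviewer's point visible: a jet whose fall-off occurs above the 240 m observational limit is simply not detected.

```python
import numpy as np

def detect_llj(heights, speeds, falloff=2.0, max_height=240.0):
    """Return (True, nose_height) if a low-level jet is detected.
    Jet nose = profile maximum at or below max_height; requires the speed to
    drop by >= `falloff` m/s somewhere above the nose, still within max_height."""
    heights = np.asarray(heights, float)
    speeds = np.asarray(speeds, float)
    mask = heights <= max_height          # observational limit (240 m above MSL)
    h, s = heights[mask], speeds[mask]
    i = int(np.argmax(s))                 # candidate jet core
    if i == len(s) - 1:                   # no observed levels above the core:
        return False, None                # fall-off cannot be verified -> missed jet
    if s[i] - s[i + 1:].min() >= falloff:
        return True, float(h[i])
    return False, None

# Jet nose at 100 m with a >2 m/s fall-off above it within 240 m
hts = [40, 60, 80, 100, 140, 180, 220, 240]
spd = [8, 10, 12, 13, 11, 10.5, 10.4, 10.3]
print(detect_llj(hts, spd))   # (True, 100.0)
```

A monotonically increasing profile up to 240 m returns `(False, None)` even if the true fall-off sits just above the lidar's range, which is the underestimation noted above.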
Line 204: causing a wind direction change
Line 235: during the pre-ridge LSMP
Figure 5: Another very nice figure! The caption is a bit confusing, however. Should the phrase “The line in the centre of each box indicates the mean value and the extends of the box indicate the …” say “The line in the centre of each bar indicates the mean value and the limits of the bars indicate the …”?
Line 253-254: See previous comment for lines 121-123. This is more reason to state the limitation of the definition/data back on lines 121-123.
Line 265: OK, here the LLJ limitation is acknowledged. I think it would be helpful to mention something about this back on lines 121-123, stating that more will be said about it later.
Citation: https://doi.org/10.5194/wes-2024-76-RC1
RC2: 'Comment on wes-2024-76', Anonymous Referee #2, 17 Sep 2024
This study uses Self-Organizing Maps (SOM) and K-means clustering to reduce data dimensionality and identify the key components of variables such as Z500, which describe different large-scale meteorological patterns (LSMPs) that produce varying wind patterns. For each LSMP, wind data from HRRR is evaluated against two in-situ lidar buoys, with biases documented and presented. I found the manuscript technically sound, with interesting and valid methods to present the results. Below are several suggestions that could help improve the motivation and discussion sections:
- In the introduction, you mentioned that the community relies on modeling data, but you didn’t explain why. A transition paragraph discussing the sparsity of observational data offshore is needed. This would provide context as to why models are essential and why it’s important to validate them.
- Although your focus is on validating HRRR, it might be worthwhile to mention other modeling datasets, especially since HRRR has a relatively short record. For instance, you could reference NOW-23 (offshore wind data developed by NREL) or the newly published Wind Toolkit (WTK-LED) by NREL. These datasets could also serve wind resource assessment purposes.
- The method you developed for validating HRRR model performance under different LSMPs can be applied to other model products as well. It might be helpful to mention this in the discussion to highlight the broader applicability of your approach.
- Finally, while this paper proposes a useful method for validating models beyond just examining overall mean wind speeds, it would be valuable to discuss the implications for industry and data users. How should they interpret the identified biases when using these data for wind farm development? What practical guidance can be offered?
Citation: https://doi.org/10.5194/wes-2024-76-RC2
Viewed
| HTML | PDF | XML | Total | BibTeX | EndNote |
| --- | --- | --- | --- | --- | --- |
| 284 | 86 | 20 | 390 | 15 | 13 |