This work is distributed under the Creative Commons Attribution 4.0 License.
The Impact of Climate Change on Extreme Winds over Northern Europe According to CMIP6
Abstract. We study the possible effect of climate change on extreme winds over Northern Europe using data from 18 models of the Sixth Phase of the Coupled Model Intercomparison Project (CMIP6) and the high-emission SSP585 scenario. We use the spectral correction method to correct the 6-hourly wind speeds and calculate the 50-year wind at an equivalent temporal resolution of 10 minutes, consistent with the International Electrotechnical Commission (IEC) standard. We assess the quality of the CMIP6 wind data during the historical period by comparing it to the spatial patterns of extreme winds in three reanalysis datasets. We estimate the possible effect of climate change by comparing the extreme-wind parameters, including the 50-year wind and the 95th percentile of the wind speed, as well as the change in turbine class at 50 m, 100 m and 200 m, between a near-future period (2020–2049) and the historical period (1980–2009). The analysis shows an overall increase in extreme winds in the North Sea and the southern Baltic Sea, but a decrease over the Scandinavian Peninsula and most of the Baltic Sea. However, the analysis is inconclusive as to whether higher or lower classes of turbines will be installed in this area in the future.
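For background, the 50-year wind referred to in the abstract is conventionally estimated by fitting an extreme-value distribution to a series of period maxima. The sketch below is not the paper's spectral correction method; it is a minimal Gumbel fit (method of moments) applied to hypothetical annual maxima:

```python
import numpy as np

def fifty_year_wind(annual_maxima, T=50.0):
    """Estimate the T-year return wind from annual maxima using a
    Gumbel distribution fitted by the method of moments."""
    x = np.asarray(annual_maxima, dtype=float)
    beta = np.sqrt(6.0) * x.std(ddof=1) / np.pi   # scale parameter
    mu = x.mean() - 0.5772 * beta                 # location (Euler-Mascheroni const.)
    # Return level: Gumbel quantile at annual non-exceedance prob. 1 - 1/T
    return mu - beta * np.log(-np.log(1.0 - 1.0 / T))

# Hypothetical 30 years of annual maximum 10-min wind speeds (m/s)
rng = np.random.default_rng(0)
maxima = 25.0 + 3.0 * rng.gumbel(size=30)
u50 = fifty_year_wind(maxima)   # return level exceeds the sample mean
```

The return level necessarily lies above the mean of the annual maxima, since it corresponds to the 98 % annual quantile for T = 50.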
Status: closed
RC1: 'Review of wes-2022-102', Anonymous Referee #1, 16 Feb 2023
The authors analyze the effect of climate change on extreme winds in Europe. The analysis is based on an 18-member subset of the CMIP6 ensemble that had previously been used in a related study by Hahmann et al. The authors compare near-future (up to 2050) to historical extreme winds, using a temporal downscaling method to compute the 10-min wind-speed extreme with a recurrence time of 50 years from 6-hourly climate model outputs. The authors report an increase in the North Sea and parts of the Baltic and reductions over the Scandinavian Peninsula, as well as generally low signal-to-noise ratios.
The topic appears highly relevant both in terms of wear & tear and turbine selection. However, I have substantial concerns about the viability of the chosen approach. In addition to the methodological concerns, I find that the paper is very difficult to follow because new methods are introduced on the fly, often in very imprecise terms, making it difficult to understand what exactly is happening. In its current form, I doubt that anyone but the authors would be able to reproduce the results. Unless the paper is substantially improved along the issues outlined below, I suggest rejecting it.
Below, I provide a list of concerns (roughly decreasing in importance):
1) The approach heavily and non-linearly relies on the sub-daily part of the spectrum. However, the chosen input data only has a coarse temporal resolution of 6 h. The authors fill the high-frequency part of the spectrum using a simple equation (Eq. 3) with one free parameter. There is no evidence or supporting analysis that would justify this approach in the context of using global climate model output. I think there is a risk that a large part of the results is an artifact of this methodological choice.
One way to provide evidence that this approach is solid would be to test your spectral tail correction with the real data of those models that provide higher output frequencies (or by artificially reducing CMIP6 output frequency and checking whether your approach reproduces the real spectrum). Moreover, regional climate model simulations would be an obvious alternative since they provide higher resolution in space and time.
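The validation proposed here can be sketched in a few lines: degrade a high-frequency series to 6-hourly and compare the two periodograms; a fitted tail model would then be judged against the full-resolution spectrum. A minimal illustration with a synthetic AR(1) series (the paper's Eq. 3 tail model is not reproduced here):

```python
import numpy as np

def periodogram(u, dt_hours):
    """One-sided periodogram of a wind-speed series; frequencies in cycles/day."""
    u = np.asarray(u, float) - np.mean(u)
    n = len(u)
    spec = np.abs(np.fft.rfft(u)) ** 2 * dt_hours / (24.0 * n)
    freq = np.fft.rfftfreq(n, d=dt_hours / 24.0)
    return freq, spec

# Synthetic hourly "truth": red-noise (AR(1)) wind speed around 10 m/s
rng = np.random.default_rng(1)
n = 24 * 365
u = np.empty(n)
u[0] = 10.0
for i in range(1, n):
    u[i] = 10.0 + 0.98 * (u[i - 1] - 10.0) + 0.5 * rng.standard_normal()

f_hi, s_hi = periodogram(u, dt_hours=1.0)         # full-resolution spectrum
f_lo, s_lo = periodogram(u[::6], dt_hours=6.0)    # degraded to 6-hourly
# The degraded spectrum only reaches the 6-hourly Nyquist frequency;
# the test is whether a parametric tail fitted to f_lo reproduces s_hi beyond it.
```

The same comparison could be run with real sub-daily output from the CMIP6 models that provide it, as the comment suggests.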
2) The authors do not bias correct the climate models and they do not provide any explanation why not.
3) The authors do not thoroughly discuss ensemble agreement, do not quantify significance, and do not compare to background climate variability. That is, they fail to address essential elements of any climate change impact study.
4) The method is insufficiently documented. Examples:
- The authors state that wind speeds above ground are computed using the logarithmic height profile, but the details are unclear. Do they use surface roughness from the individual models to do so? Do they include changes in surface roughness due to land use change?
- The authors do not explain how the value of a in Eq. 3 is derived, although it is central to the analysis. Is that a fit? Looking at Fig. 1, your approach matches the data fairly badly.
- You replace the spectrum for frequencies higher than 0.8 per day. Why? Your data goes up to 4 per day.
- You mention Q95 for the first time in your results. How do you define it? On the raw data? Or using your correction?
- Out of the four criteria that you define, the first two appear too broad. The range in (a) is huge and stronger winds over oceans as compared to land (b) are also fairly trivial. As you can see in your Table 2, all models score on criteria 1 & 2.
- The Annual/Periodic Maximum Method and the Peak-over-Threshold Method are first mentioned in the Results and are never introduced.
- Why do you compare the CMIP6 data to the FINO obs while not comparing the reanalyses to obs?
- Deep into the Results, you mention that you regrid the CMIP6 data to the grid of model 5 using nearest-neighbor interpolation. First, this information belongs in the Methods. Second, please provide justification for these choices. What is the rationale for choosing a grid that sits in the middle of the range? Why not choose the grid of a different model, normally the one with the coarsest resolution? Also, nearest-neighbor interpolation creates artifacts near the coast, which can be really essential if water bodies are small compared to the model resolution, as in the Baltic Sea. Please justify.
- You introduce r on the fly and, from what I understand, it is the relative change in the corrected U50 in the ensemble mean. You then call r greater than 5 % a "significant increase". This wording is confusing because it carries the meaning of a statistical significance test, which you do not perform (even though you probably should, given the large spread between models).
5) What is the role of spatial aggregation? CMIP6 model resolution is quite coarse (grid-box size around 10 000 km^2), and you investigate extremes; don't you need to disaggregate in space as well?
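For reference on the surface-roughness bullet in point 4 above: the neutral logarithmic profile has the general form below, and its output is quite sensitive to the roughness length z0 that the paper leaves unspecified. A minimal sketch with hypothetical values:

```python
import numpy as np

def log_profile(u_ref, z_ref, z, z0):
    """Extrapolate wind speed from z_ref to z using the neutral
    logarithmic profile with roughness length z0 (all heights in metres)."""
    return u_ref * np.log(z / z0) / np.log(z_ref / z0)

# Sensitivity to z0: the same 10 m/s at 10 m maps to quite different 100-m winds
u100 = {z0: log_profile(10.0, 10.0, 100.0, z0)
        for z0 in (0.0002, 0.03, 0.5)}   # open sea, grassland, very rough land
```

The spread across plausible roughness values is several m/s at 100 m, which is why the bullet asks whether per-model roughness (and its land-use-driven changes) was used.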
6) Attribution: Many of your results are perfectly compatible with climate variability being the cause for the change instead of climate change. For example, Fig. 4 shows that roughly half of the models show weak increase while the other half shows weak decreases. You don't provide any evidence in terms of processes or comparison to background variability to justify the conclusion that your results actually represent climate change. The same is true when looking at Table 3, in particular for those models that score highly (i.e. Group II). Changes in the mean are essentially zero and the standard deviation is 1 to 2 orders of magnitude larger than the change.
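The comparison to background variability asked for here can be made concrete with a simple signal-to-noise ratio: the change in the mean divided by the pooled interannual standard deviation. A sketch with hypothetical annual values (not the authors' data):

```python
import numpy as np

def signal_to_noise(hist_annual, fut_annual):
    """Change in the mean relative to pooled interannual variability.
    |SNR| well below 1 means the change is indistinguishable from noise."""
    hist = np.asarray(hist_annual, float)
    fut = np.asarray(fut_annual, float)
    pooled_std = np.sqrt(0.5 * (hist.var(ddof=1) + fut.var(ddof=1)))
    return (fut.mean() - hist.mean()) / pooled_std

# Hypothetical annual extreme-wind statistics for two 30-year periods:
# a +0.2 m/s mean shift buried in 2 m/s interannual spread
rng = np.random.default_rng(2)
hist = 24.0 + 2.0 * rng.standard_normal(30)
fut = 24.2 + 2.0 * rng.standard_normal(30)
snr = signal_to_noise(hist, fut)
```

The point of the comment is that, as in Table 3 of the paper, the noise term can exceed the signal by one to two orders of magnitude, in which case attribution to forced climate change is not supported.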
7) In your Discussion you write: "There are systematic and consistent patterns for increased and decreased extreme winds that can be identified in certain regions, for both U50 and Q95, even though we are using different groups of data from the 18 CMIP6 models." Could you please point to the exact figure and/or table that provides evidence for this claim? From what I see, this conclusion is incompatible with your results (see, e.g., my comment 6 above).
8) Similarly, you then write: "over the entire study domain, an overall decrease in U50 and Q95; the largest model group (about 40%) suggests no considerable change, 20% of models suggests significant increase and a slightly smaller number of models suggests significant decrease". How can you flag an overall decrease in U50 and Q95 as one of your main results if the largest model group (40 %) suggests no change and the others do not agree on the sign of change? This sentence contradicts itself.
9) You say that the results do "not show any dependence on the spatial resolution of the CMIP6 data". This is a very strong statement. What part of your analysis backs it up?
10) Code availability only points to retrieval of CMIP6 winds developed by Andrea Hahmann for a separate paper. I can't see any code for this analysis.
11) The paper is heavily self-referential. I count 15 references to the authors' own work.
12) CMIP6 subselection: I believe to remember that the choice in Hahmann et al. was also motivated by the need for air density to compute wind energy density. This criterion would not seem to matter here so why not add more models?
Minor:
The review of previous publications (roughly lines 23-37) mostly lists publications; it doesn't summarize and/or contextualize the reported results.
You are citing an industry study from 2011 as evidence for "damage to society from extreme winds is rising every year". This study is 12 years old. How can it provide evidence that damages rise every year?
I find your Group names difficult to memorize and I would also suggest to order them differently. Currently Group I is least strict (all models), Group II is most strict and Group III is somewhere in between.
"based on infrastructure known from the" --> Replace infrastructure with a more appropriate word like data or so
"Exemplified for the time series in Fig. 1" --> Fig. 1 shows a spectrum, not a time series.
Citation: https://doi.org/10.5194/wes-2022-102-RC1
- AC1: 'Reply on RC1', Xiaoli Larsén, 28 Mar 2023
RC2: 'Comment on wes-2022-102', Anonymous Referee #2, 22 Feb 2023
Review of “The Impact of Climate Change on Extreme Winds over Northern Europe According to CMIP6” by Larsén et al.
This study investigated future changes in extreme winds over Northern Europe by analyzing CMIP6 model results. The authors used a method called "spectral correction" to obtain wind speed data at 10-minute intervals from 6-hourly model outputs, and then calculated the 50-year wind. By comparing historical simulations and future projections, they discussed possible changes in extreme winds in the region. This research covers an important topic, and it has some merits. However, I have a few major concerns about the methodology and the statistical significance of the results. I recommend a major revision and have listed my specific comments below.
1. Influences from natural climate variability. The authors compared the CMIP6 historical simulations with reanalysis data over the period 1980-2009 to assess the model performance. However, the observational results during this 30-year period are likely affected by both anthropogenic forcing and decadal to inter-decadal natural climate variability. But the phases of natural climate variability are randomly distributed across the models and therefore not synchronized with observations. In particular, given the relatively small sample size (6 models selected), comparisons between the model and observational results may be strongly affected by low-frequency natural climate variability (such as the Pacific Decadal Oscillation and the Atlantic Multidecadal Oscillation). Similarly, comparing future projections during 2020-2049 with historical simulations may not necessarily isolate the climate change effect, either.
2. Since CMIP models still exhibit biases in simulating many aspects of the global climate system, it is important to select the ones that better simulate the subject of the research. However, the criteria used to evaluate the CMIP6 model performance (L161-168) seem a bit subjective to me. I would suggest using a spatial correlation coefficient to assess model-data agreement, and I wonder how this may affect the conclusions of this study.
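The suggested spatial (pattern) correlation can be computed as a centred Pearson correlation over the flattened fields, optionally weighted by cos(latitude) to account for grid-cell area. A minimal sketch (field values are hypothetical):

```python
import numpy as np

def pattern_correlation(model_field, ref_field, lat=None):
    """Centred pattern correlation between two 2-D (lat, lon) fields,
    optionally area-weighted by cos(latitude)."""
    m = np.asarray(model_field, float).ravel()
    r = np.asarray(ref_field, float).ravel()
    if lat is None:
        w = np.ones_like(m)
    else:
        # one weight per grid cell: cos(lat) repeated along the lon axis
        w = np.repeat(np.cos(np.deg2rad(np.asarray(lat, float))),
                      np.asarray(model_field).shape[1])
    m = m - np.average(m, weights=w)
    r = r - np.average(r, weights=w)
    return np.average(m * r, weights=w) / np.sqrt(
        np.average(m * m, weights=w) * np.average(r * r, weights=w))

# Hypothetical 3x4 extreme-wind fields on a latitude band 50-60 N
field = np.arange(12.0).reshape(3, 4)
lat = np.array([50.0, 55.0, 60.0])
score = pattern_correlation(field, field, lat)   # identical fields give 1
```

Applied per model against each reanalysis, this yields an objective model-ranking metric in place of the paper's subjective criteria.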
3. I find that there is an overall lack of statistical significance testing in the results. For instance, L204-209 uses relative changes in the wind speed to determine whether the changes are significant. But even if a change is small in magnitude, it can still be significant as long as the signal is large compared to the noise. I would suggest performing either a Student's t test or showing model agreement in the figures to better demonstrate which signals are significant, which may provide more useful information.
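The suggested test, applied per grid cell to annual values from the two periods, can be sketched without any statistics package using Welch's t statistic (synthetic data; the significance threshold quoted in the comment is approximate):

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom
    (Welch-Satterthwaite), for samples with possibly unequal variances."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    dof = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, dof

# Hypothetical per-grid-cell annual values for the two 30-year periods
rng = np.random.default_rng(3)
hist = 24.0 + 2.0 * rng.standard_normal(30)
fut = 25.0 + 2.0 * rng.standard_normal(30)
t, dof = welch_t(fut, hist)
# |t| of roughly 2 or more would flag a change unlikely to be noise at ~5 %
```

With ensembles, an equivalent visual alternative is stippling grid cells where a stated fraction of models agrees on the sign of change, as the comment also suggests.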
4. L179-190: Here the authors compared their results with the FINO masts, but I think more information about these data is needed to help readers better understand the results, such as locations, observational periods, etc.
Citation: https://doi.org/10.5194/wes-2022-102-RC2
- AC2: 'Reply on RC2', Xiaoli Larsén, 28 Mar 2023