The financing of a wind farm directly relates to the preconstruction energy yield assessments, which estimate the annual energy production of the farm. The accuracy and precision of the preconstruction energy estimates can dictate the profitability of a wind project. Historically, the wind industry tended to overpredict the annual energy production of wind farms. Experts have dedicated the past decade to eliminating such prediction errors, and the reported average energy prediction bias has recently been declining. Herein, we present a literature review of the energy yield assessment errors across the global wind energy industry. We identify a long-term trend of reduction in the overprediction bias, whereas the uncertainty associated with the prediction error remains prominent. We also summarize the recent advancements in the wind resource assessment process that explain the bias reduction, including improvements in modeling and measurement techniques. Additionally, because energy losses and uncertainties substantially influence the prediction error, we document and examine the estimated and observed loss and uncertainty values from the literature, according to the framework proposed in the International Electrotechnical Commission 61400-15 wind resource assessment standard. From our findings, we highlight opportunities for the industry to move forward, such as the validation and reduction of prediction uncertainty and the prevention of energy losses caused by wake effects and environmental events. Overall, this study provides a summary of how the wind energy industry has been quantifying and reducing prediction errors, energy losses, and production uncertainties. Finally, for this work to be as reproducible as possible, we include all of the data used in the analysis in appendices to the article.
Introduction
Determining the range of annual energy production (AEP), or the energy yield
assessment (EYA), has been a key part of the wind resource assessment (WRA)
process. The predicted median AEP is also known as the P50, i.e., the AEP expected to be exceeded 50 % of the time. P50 values are often defined with timescales such as 1, 10, and 20 years. In this study, unless stated otherwise, we primarily discuss the 20-year P50, which is the typical expected lifespan of utility-scale wind turbines. For years, leaders in the field have been discussing the difference between predicted P50 and actual AEP, where the industry often overestimates the energy production of a wind farm (Hale, 2017; Hendrickson, 2009, 2019; Johnson et al., 2008). A recent study conducted by the researchers at the National Renewable Energy Laboratory (NREL) found an average of 3.5 % to 4.5 % P50 overprediction bias based on a subset of wind farms in the United States and accounting for curtailment (Lunacek et al., 2018).
Such P50 overestimation results in marked financial implications. Healer (2018) stated that if a wind project produces a certain percentage lower than the P50 on a 2-year rolling basis, the energy buyer, also known as the offtaker, may have the option to terminate the contract. For a 20-year contract, if a wind farm has a 1 % chance of such underproduction over a 2-year period, the probability of such an event taking place within the 18 2-year rolling periods is 16.5 %, as 100 % - (100 % - 1 %)^18 = 16.5 % (Healer, 2018), assuming each 2-year rolling period is independent. Therefore, projects with substantial energy-production uncertainty face financial risk under modern energy contracting.
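The arithmetic above can be checked in a few lines; this is a minimal sketch in which the 1 % per-period probability and the independence assumption come from Healer (2018):

```python
# Probability that a 1 %-per-period underproduction event occurs at least
# once within the 18 independent 2-year rolling periods of a 20-year
# contract (illustrative check of the Healer, 2018 arithmetic).
p_single = 0.01   # chance of underproduction in any one 2-year period
n_periods = 18    # 2-year rolling periods in a 20-year contract

p_at_least_once = 1 - (1 - p_single) ** n_periods
print(f"{p_at_least_once:.1%}")  # → 16.5%
```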
Mind map of energy-production loss, according to the IEC 61400-15
proposed framework. The blue and black rounded rectangles represent the
categorical and subcategorical losses, respectively. Details of each loss
category and subcategory are discussed in Table A1.
Random errors cause observations or model predictions to deviate from the truth and lead to uncertainty (Clifton et al., 2016), and uncertainty is quantified via probability (Wilks, 2011). In WRA, the P values surrounding P50 such as P90 and P95 characterize the uncertainty of the predicted AEP distribution. Such energy-estimate uncertainty depends on the cumulative certainty of the entire WRA process, from wind speed measurements to wind flow modeling (Clifton et al., 2016). When a sample of errors is Gaussian distributed, the standard deviation around the mean is typically used to represent the uncertainty of errors. Traditionally, the wind energy industry uses standard deviation, or σ, to represent uncertainty.
The WRA process governs the accuracy and precision of the P50, and a key
component in WRA constitutes the estimation of energy-production losses and
uncertainties. Wind energy experts have been using different nomenclature in
WRA, and inconsistent definitions and methodologies exist. To consolidate
and ameliorate the assessment process, the International Electrotechnical
Commission (IEC) 61400-15 working group has proposed a framework to classify
various types of energy-production losses and uncertainties (Filippelli et al., 2018, adapted in Appendix A). We illustrate the categorical and
subcategorical losses and uncertainties in Figs. 1 and 2. Note that the proposed framework is not an exclusive or exhaustive list of losses and uncertainties because some institution-specific practices may not fit into the proposed standard. Moreover, the proposed framework presented herein does not represent the final IEC standards, which are pending publication.
Mind map of energy-production uncertainty, according to the IEC 61400-15 proposed framework. The purple and black rounded rectangles
represent the categorical and subcategorical uncertainties, respectively.
Details of each uncertainty category and subcategory are discussed in Table A2.
The wind energy industry has been experiencing financial impacts caused by
the challenges and difficulties in predicting energy-production losses and
uncertainties over the lifetime of a modern wind project, which can continue
to operate beyond 20 years:
an AEP prediction error of 1 GWh, e.g., because of the P50 prediction bias, translates to about EUR 50 000 to 70 000 lost (Papadopoulos, 2019);
reducing energy uncertainty by 1 % can result in USD 0.5 to 2 million of economic benefits, depending on the situation and the financial model (Brower et al., 2015; Halberg, 2017);
a change of 1 % in wind speed uncertainty can lead to a 3 % to 5 % change in net present value of a wind farm (Kline, 2019).
Experts in the industry have presented many studies on P50 prediction error, energy loss, and uncertainty for years, and the purpose of this literature review is to assemble previous findings and deliver a meaningful narrative. This article is unique and impactful because it is the first comprehensive survey and analysis of the key parameters in the WRA process across the industry. The three main research questions of this study include the following:
Is the industry-wide P50 prediction bias changing over time, and what are the reasons for the changes?
What are the ranges of different categories of energy-production losses and uncertainties?
Given our understanding of losses and uncertainties, what are the opportunities for improvements in the industry?
From past research, in addition to the energy-production uncertainties, we
review how the industry has been quantifying various wind speed uncertainties, particularly from wind measurements, extrapolation methods,
and modeling. We also compile and present the wind speed results herein.
We present this article with the following sections: Sect. 2 documents the data and the methodology of data filtering; Sect. 3 focuses on P50 prediction bias, including its trend and various reasons of bias improvement; Sects. 4 and 5, respectively, illustrate the energy-production loss and uncertainty, according to the IEC-proposed framework; Sect. 6 describes the numerical ranges of various wind speed uncertainties; Sect. 7
discusses the implications and future outlook based on our findings; Sect. 8 provides conclusions; Appendix A outlines the energy loss and uncertainty
frameworks proposed by the IEC 61400-15 working group; Appendix B compiles the data used in this analysis.
Data and methodology
We conduct our literature review over a broad spectrum of global sources. The literature includes the presentations at academic, industry, and professional conferences, particularly the Wind Resource and Project Energy Assessment workshops hosted by the American Wind Energy Association (AWEA) and WindEurope, as they are the key annual gatherings for wind resource experts. Additionally, we examine data from industry technical reports and white papers; publicly available user manuals of wind energy numerical models; technical reports from government agencies, national laboratories, and research and academic institutions; and peer-reviewed journal articles. Many of the literature sources originate in North America and Europe. Meanwhile, many of the regional corporations we cited in this article have become global businesses after mergers and acquisitions; hence, their presentations and publications can also represent international practices.
In most cases, we label the data source with the published year of the study, unless the author highlights a change of method at a specific time. For example, if an organization publishes a study in 2012 and reports their improvements on P50 prediction bias by comparing their “current” method with their “previous set of methodology before 2012”, the two P50 biases are recorded as 2012 and 2011, respectively. Moreover, for the same study that documents multiple P50 prediction errors in the same year, we select the one closest to zero, because those numbers reflect the state of the art of P50 validation of that year (Fig. 3). Accordingly, we use the paired P50 errors to indicate the effects from method adjustments (Fig. 4). To track the bias impact of technique changes from different organizations, we combine the closely related, ongoing series of studies from a single organization, usually by the same authors from the same institutions (each line in Fig. 4).
The trend of P50 prediction bias: (a) scatterplot of 63 independent P50 prediction error values, where R2 is the coefficient of determination and n is the sample size. Negative bias means the predicted AEP is higher than the measured AEP, and vice versa for positive bias. The solid black line represents the quadratic regression, the dark grey cone displays the 95 % confidence interval of the regression line, the light grey cone depicts the 95 % prediction interval, and the horizontal dashed black line marks the zero P50 prediction error. (b) As in panel (a) but only for 56 studies that use more than 10 wind farms in the analyses. The vertical violet bars represent the estimated uncertainty bounds (presented as 1 standard deviation from the mean) of the mean P50 prediction errors in 15 of the 56 samples. Table B1 summarizes the bias data illustrated herein. For clarity, the regression uses the year 2002 as the baseline; hence, the resultant regression constant, i.e., the derived intercept, is readily interpretable.
Illustration of P50 bias changes over time after method
modifications in 17 studies. The dot and the cross, respectively, represent
the starting point and the finish point of the P50 prediction error because of method adjustments. A solid line indicates that the P50 bias decreases after the method change, and a dotted line displays the opposite. The different colors are used solely to differentiate the lines and carry no meaning. The paired data are presented in Table B2.
We also derive the trend of P50 prediction errors using polynomial regression and investigate the reasons behind such a trend. We use second-degree polynomial regression (i.e., quadratic regression) to analyze the trend of the P50 prediction errors over time, as polynomials of higher degrees only marginally improve the fit. We choose polynomial regression over simple linear regression because the P50 prediction errors are approaching zero at a diminishing rate, and we use a quadratic rather than a higher-order polynomial to avoid overfitting.
Additionally, in the regressions presented in this article (Figs. 3, 8, and C1), we present an estimated 95 % confidence interval, generated via bootstrapping with replacement using the same sample size of the data, which is performed through the regplot function in the seaborn Python library (Waskom et al., 2020). The confidence interval describes the bounds of the regression coefficients with 95 % confidence. Furthermore, we present the 95 % prediction interval in Fig. 3, which depicts the range of the predicted values, i.e., the P50 prediction bias, with 95 % confidence, given the existing data and regression model. The prediction interval is calculated using standard deviation, assuming an underlying Gaussian distribution. In short, the confidence interval illustrates the uncertainty of the regression function, whereas the prediction interval represents the uncertainty of the estimated values of the predictand (Wilks, 2011). In addition, we evaluate the regression analysis with the coefficient of determination (R2), which represents the proportion of the variance of the predictand explained by the regression.
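The fitting procedure described above can be sketched with NumPy alone; the bias values here are synthetic placeholders (the real data are tabulated in Table B1), and the bootstrap loop mirrors what seaborn's regplot performs internally:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the P50 prediction errors (year vs. bias in %);
# the actual values are listed in Table B1.
years = np.arange(2002, 2020, dtype=float)
bias = -10.0 + 0.8 * (years - 2002) - 0.02 * (years - 2002) ** 2 \
       + rng.normal(0.0, 1.0, years.size)

x = years - 2002                      # baseline the regression at 2002
coeffs = np.polyfit(x, bias, deg=2)   # second-degree (quadratic) fit

# Bootstrap with replacement, using the same sample size as the data, to
# estimate the 95 % confidence interval of the fitted regression curve.
n_boot = 1000
grid = np.linspace(x.min(), x.max(), 50)
fits = np.empty((n_boot, grid.size))
for i in range(n_boot):
    idx = rng.integers(0, x.size, x.size)
    fits[i] = np.polyval(np.polyfit(x[idx], bias[idx], deg=2), grid)
ci_low, ci_high = np.percentile(fits, [2.5, 97.5], axis=0)

# Coefficient of determination (R^2): proportion of the predictand
# variance explained by the regression.
resid = bias - np.polyval(coeffs, x)
r2 = 1 - resid.var() / bias.var()
```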
For loss and uncertainty, we have limited data samples in certain categories because such values are only sparsely reported. When a source does not provide an average value, we take a simple arithmetic mean when both the upper and lower bounds are listed. For instance, when the average wake loss is between 5 % and 15 %, we project the average of 10 % in Fig. 6, and we present all the original values in Appendix B. If only the upper bound is found, then we project the data point as a maximum: the crosses in Fig. 6 are used as an example. We also use linear regression to explore trends in loss and uncertainty estimates.
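These selection rules can be expressed as a small helper; this is a sketch, and the function name summarize_reported_loss is ours, not from any cited source:

```python
# Sketch of the bound-handling rule above: a full range yields an
# arithmetic mean; an upper bound alone is kept as a maximum (plotted
# as a cross in Fig. 6).
def summarize_reported_loss(lower=None, upper=None):
    """Return a (kind, value) pair for a reported loss range."""
    if lower is not None and upper is not None:
        return ("mean", (lower + upper) / 2)
    if upper is not None:
        return ("maximum", upper)
    return ("missing", None)

# e.g., a reported wake loss of 5 % to 15 %:
# summarize_reported_loss(5, 15) → ("mean", 10.0)
```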
We categorize the data to the best of our knowledge to synthesize a holistic
analysis. On one hand, if a source uses marginally different terminology for a type of loss or uncertainty than the IEC-proposed framework, we first attempt to classify it within the IEC framework, gather other values in the same category or subcategory from the same data source, and select the minimum and the maximum. As an illustration, if the total electrical losses from the substation and the transmission line are, respectively, 1 % and 2 %, we label the total electrical loss with the range of 1 % to 2 %. On the other hand, when the types of loss and uncertainty illustrated in the literature differ largely from the IEC framework, we label them separately (Figs. 7 and 11). Because a few studies contrast wake loss and
nonwake loss, where nonwake loss represents every other type of energy loss,
we also include nonwake loss in this study (Figs. 6 and 10). When a type of uncertainty is recorded as simply “extrapolation” (seen in McAloon, 2010 and Walter, 2010), we label the value as both horizontal and vertical extrapolation uncertainties with a note of “extrapolation” in Tables B6 and B8. We also divide the reported losses and uncertainties into two groups, the “estimated” and the “observed”, where the former are based on simulations and modeling studies, and the latter are quantified via field measurements.
Unless specifically stated otherwise in Appendix B, we present a loss value as the percentage of production loss per year, and we document an uncertainty number as the single standard deviation in energy percentage in the long term, usually for 10 or 20 years. The wind speed uncertainty is stated as a percentage of wind speed in m s-1, and the uncertainty of an energy loss is expressed as percent of a loss percentage.
This article evaluates a compilation of averages, where each data point
represents an independent number. The metadata for each study in the literature vary, in which the resultant P50 prediction errors, losses, and uncertainties come from diverse collections of wind farms with different
commercial operation dates in various geographical regions and terrains.
Therefore, readers should not compare a specific data point with another. In
this study, we aim to discuss the WRA process from a broad perspective. Other caveats of this analysis include the potentially inaccurate classification of the data into the proposed IEC framework; the prime focus on P50 rather than P90, which also has a strong financial implication; and the tendency in the literature to selectively report extreme losses and uncertainties caused by extraordinary events, such as availability loss and
icing loss, which potentially misrepresents reality. Our data sources are limited to publicly available data or those accessible at NREL. We
perform a rigorous literature review from over 150 independent sources, and
the results presented in this article adequately display the current state
of the wind energy industry.
P50 prediction bias
Bias trend
We identify an improving trend in the mean P50 prediction bias, where the overprediction of energy production gradually decreases over time (Fig. 3), and the narrow 95 % confidence interval of the regression fit supports the long-term trend. Such an improving trend is not strictly statistically significant (Fig. 3a), even after removing the studies based on small wind farm sample sizes (Fig. 3b). However, the R2 of 0.578 in Fig. 3b implies that over half of the variance in bias can be described by the regression, and less than half of the variance is caused by the inherent uncertainty between validation studies that does not change over time. The average bias magnitude also does not correlate with the size of the study, in either wind farm sample size or wind farm years (not shown). Note that in some early studies, the reported biases measured per wind farm differ from those measured per wind farm year from the same source; we select the error closest to zero for each independent reference because the bias units are the same (Sect. 2).
The uncertainty of the average P50 prediction error quantified by the studies remains large: across the 15 data sources that report an estimated P50 uncertainty, the mean standard deviation is 6.6 % (violet bars in Fig. 3b). The industry started to disclose the standard deviations of P50 validation studies in 2009, and the practice is becoming more common. With only 15 data points, we cannot identify a temporal trend in the uncertainty of the P50 prediction bias. Even though the industry-wide mean P50 prediction bias is converging towards zero, the industry appears to over- or underpredict the AEP for many individual wind projects.
Reasons for bias changes
To correct for the historical P50 prediction errors, some organizations
publicize the research and the adjustments they have been conducting for their WRA processes. We summarize the major modifications of the WRA procedure in Table 1. Most studies demonstrate mean P50 bias improvement over time (Fig. 4), and the magnitude of such bias reduction varies. In two studies, the authors examine the impact of accounting for windiness, which is the quantification of long-term wind speed variability, in their WRA methodologies. They acknowledge the difficulty in quantifying interannual wind speed variability accurately, and their P50 prediction errors worsen after embedding this uncertainty in their WRA process (vertical dashed lines in Fig. 4).
Categories of method adjustments to improve the wind resource
assessment process and the respective data sources.
Account for additional factors in wind resource assessment and operation, e.g., windiness or long-term correction of wind data, suboptimal operation, external wake effect, and degradation of long-term meteorological masts (AWS Truepower, 2009; Johnson, 2012).

Consider meteorological effects on power production, e.g., wind shear, turbulence, air inflow angle, and atmospheric stability (AWS Truepower, 2009; Brower et al., 2012; Elkinton, 2013; Johnson, 2012; Ostridge, 2017).

Improve modeling techniques, e.g., turbine performance, wind flow, wake, flow over complex terrain, effects of changes in surface roughness, and wind farm roughness (Elkinton, 2013; Johnson, 2012; Ostridge, 2017; Papadopoulos, 2019).

Improve measurements and reduce measurement bias, e.g., adjust for dry friction whip of anemometers (AWS Truepower, 2009; Johnson, 2012; Ostridge, 2017; Papadopoulos, 2019).

Correct for previous methodology shortcomings, e.g., loss assumptions and shear extrapolation (Ostridge, 2017; Papadopoulos, 2019).

Energy-production loss
The prediction and observation of production losses are tightly related to
the P50 prediction accuracy; hence, we contrast the estimated and measured losses in various categories and benchmark their magnitude (Figs. 5–7). The total energy loss is calculated from the difference between the gross energy estimate and the product of gross energy prediction and various categorical production efficiencies, where each efficiency is 1 minus a categorical energy loss (Brower, 2012). Of the total categorical losses, we record the largest number of data points from availability loss, and wake loss displays the largest variability among studies (Fig. 5). For availability loss, the total observed loss varies more than the total estimated loss and displays a larger range (Fig. 6a). The turbine availability loss appears to be larger than the balance of plant and grid availability losses; however, more data points are needed to validate those estimates (Fig. 6a). Except for one outlier, the turbine performance losses, in both predictions and observations, are about or under 5 % (Fig. 6b). Large ranges of environment losses exist, particularly for icing and degradation losses, which can drastically decrease AEP (Fig. 6c). Note that some of the icing losses indicated in the literature represent the fractional
energy-generation loss from production stoppages over atypically long periods in wintertime, rather than a typical energy loss percentage for a calendar year. Electrical loss is consistently reported as a routine energy reduction with high certainty and relatively low magnitude (Fig. 6d). Of all the categories, wind turbine wake results in a substantial portion of energy loss, and its estimations demonstrate large variations (Fig. 6e). The magnitude of estimated wake loss is larger than that of the predicted nonwake loss, which consists of other categorical losses (Fig. 6e). The observed total curtailment loss exhibits lower variability yet larger magnitude than its estimation (Fig. 6f). From the eight studies that report total loss,
the predictions range from 9.5 % to 22.5 % (Fig. 6g). We do not encounter any operational strategies loss under curtailment loss in the literature, and thus the subcategories in Fig. 6 do not cover every subcategory in Table A1.
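The total-loss bookkeeping described above, following Brower (2012), can be sketched as follows; the categorical loss values are illustrative placeholders, not recommended assumptions:

```python
# Net AEP per Brower (2012): gross AEP times the product of categorical
# efficiencies, where each efficiency is 1 minus that category's
# fractional loss. Loss values below are illustrative placeholders.
gross_aep_gwh = 100.0
losses = {
    "availability": 0.03,
    "wake": 0.08,
    "electrical": 0.02,
    "turbine_performance": 0.02,
    "environmental": 0.01,
    "curtailment": 0.01,
}

efficiency = 1.0
for loss in losses.values():
    efficiency *= 1.0 - loss

net_aep_gwh = gross_aep_gwh * efficiency
total_loss = 1.0 - efficiency  # slightly less than the simple sum of losses
```

Because the efficiencies multiply, the total loss is slightly smaller than the arithmetic sum of the categorical losses.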
Ranges of total energy-production losses in different categories,
according to the proposed framework of the IEC 61400-15 standard. Each blue dot and orange dot, respectively, represents the mean estimated loss and mean
observed loss documented in each independent reference. The losses are
expressed as percentage of AEP. The column of numbers on the right denotes
the sample size in each category, where the estimated ones are in blue and the observed ones are in orange. For clarity, the horizontal grey lines separate data from each category. Table B3 catalogs the categorical losses plotted herein.
Ranges of energy-production losses in different categories and
subcategories, according to the proposed framework of the IEC 61400-15
standard, except for nonwake in panel (e), which is an extra subcategory
summarizing other nonwake categories. Each blue dot and orange dot, respectively, represents the mean estimated loss and mean observed loss documented in each independent study. The blue and orange crosses, respectively, indicate the maximum of estimated loss and the maximum of observed loss reported, where the minima are not reported, and thus the averages cannot be calculated. The losses are expressed as percentage of AEP. The column of numbers on the right denotes the estimated and observed sample sizes in blue and orange, respectively, in each subcategory, and such sample size represents all the instances in that subcategory that recorded either the mean or the maximum loss values. For clarity, the grey horizontal lines separate data from each subcategory. Table B3 catalogs the categorical and subcategorical losses plotted herein.
Losses that inhibit wind farm operations can cause considerable monetary
impact. For example, blade degradation can result in a 6.8 % AEP loss for a single turbine in the IEC Class II wind regime, where the maximum annual average wind speed is 8.5 m s-1; this translates to USD 43 000 per year (Wilcox et al., 2017). Generally, the typical turbine failure rate is about 6 %, and a 1 % reduction in turbine failure rate can lead to around USD 2 billion of global savings in operation and maintenance (Faubel, 2019). In practice, the savings may exclude the cost of preventative measures for turbine failure, such as hydraulic oil changes and turbine inspections.
We categorize two types of energy-production losses in addition to the proposed IEC framework, namely the first few years of operation and the blockage effect (Fig. 7). For the former, a newly constructed wind farm typically does not produce at its full capacity for the first few months or even the first 2 years. The loss from the first few years of operation captures this time-specific and availability-related production loss. Regarding the latter, the blockage effect describes the wind speed slowdown upwind of a wind farm (Bleeg et al., 2018). Wind farm blockage is not a new topic (mentioned in Johnson et al., 2008) and has been heavily discussed in recent years (Bleeg et al., 2018; Lee, 2019; Papadopoulos, 2019; Robinson, 2019; Spalding, 2019). Compared to some of the losses in Fig. 6, the loss magnitudes of the first few years of operation and blockage are relatively small, each contributing less than 5 % of AEP reduction per year (Fig. 7).
As in Fig. 6 but for the loss categories outside of the proposed IEC framework, as listed in Table B4.
For trend analysis, we linearly regress every subcategorical energy loss (Fig. 6 and Table B3) on time, and we find that only two loss subcategories demonstrate notable and statistically confident trends (Fig. 8). The measured curtailment loss and the observed generic power curve adjustment loss steadily decrease over time, and the reductions have reasonable R2 (Fig. 8). No other reported losses with a reasonable number of data samples display remarkable trends (Fig. C1).
Trend in observed energy-production loss: (a) total curtailment loss and (b) generic power curve adjustment loss. The annotations correspond to those in Fig. 3, where the orange solid line
represents simple linear regression, the light orange cone illustrates the
95 % confidence interval, R2 is the coefficient of determination, and
n is sample size.
Past research further documents the uncertainties of AEP losses. Except for an outlier measuring 80 % uncertainty in wake loss, the magnitude of the uncertainty of wake loss is analogous to that of nonwake loss (Fig. 9). The industry also reports the uncertainty of wake loss more often than that of nonwake loss, as indicated by the larger number of data sources (Fig. 9). One data source reported that, depending on the location, the operational variation from month to month can alter AEP losses by more than 10 % on average (Fig. 9). Note that the results in Fig. 9 represent the uncertainty of the respective production loss percentages in Fig. 6 and Table B3, rather than the AEP uncertainty.
Uncertainty of energy-production losses, where the magnitude
corresponds to the AEP loss percentages listed in Fig. 6 and Table B3. Each dark blue dot, turquoise dot, and turquoise cross represents the estimated
uncertainty, the observed uncertainty, and the maximum observed uncertainty
of losses, respectively. The uncertainties are expressed as percentages of
uncertainty in terms of the energy-production loss percentage. The column of
numbers on the right denotes the estimated and observed sample sizes in dark
blue and turquoise, respectively, in each row, and such sample size represents all the instances in that row that reported either the mean or
the maximum values. For clarity, the grey horizontal lines separate data
from each uncertainty. Table B5 records the uncertainties displayed herein.
Energy-production uncertainty
The individual energy-production uncertainties directly influence the
uncertainty of P50 prediction. Total uncertainty is the root sum square of the categorical uncertainties; the assumption of correlation between
categories can reduce the overall uncertainty, and this is a typically consultant- and method-specific assumption (Brower, 2012). Except for a few outliers, the magnitude of the individual energy-production uncertainties across categories and subcategories is about or below 10 % (Fig. 10). The energy uncertainties from wind measurements range below 5 %, after omitting two extreme data points (Fig. 10a). The estimated long-term period uncertainty varies the most in historical wind resource (Fig. 10b), which indicates the representativeness of historical reference data (Table A2). Horizontal extrapolation generally yields higher energy-production uncertainty than vertical extrapolation (Fig. 10c and d). For plant performance, each subcategorical uncertainty corresponds to the respective AEP loss (Fig. 6 and Table A1). The range of the predicted energy uncertainty caused by wake effect is about 6 % (Fig. 10e). The estimated uncertainty of
turbine performance loss and total project evaluation period match with
those observed (Fig. 10e and f). Overall, the average estimated total uncertainty varies by about 10 %, whereas the observed total uncertainty appears to record a narrower bound, after excluding an outlier (Fig. 10g).
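The root-sum-square combination described above can be sketched as follows; the categorical values are illustrative 1-sigma placeholders, and full independence between categories is assumed:

```python
import math

# Total AEP uncertainty as the root sum square of categorical
# uncertainties (Brower, 2012). Values are illustrative placeholders
# (1 sigma, in % of AEP) and assume fully independent categories.
uncertainties = {
    "measurement": 3.0,
    "historical_wind_resource": 4.0,
    "vertical_extrapolation": 2.5,
    "horizontal_extrapolation": 3.5,
    "plant_performance": 4.5,
}

total = math.sqrt(sum(u ** 2 for u in uncertainties.values()))
# Assuming correlation between categories would add cross terms and
# change the total, which is why that assumption is consultant- and
# method-specific.
```

Note that the root sum square is always smaller than the plain sum of the categorical uncertainties, which is why correlation assumptions matter so much for the final figure.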
Ranges of energy-production uncertainties in different categories
and subcategories, according to the proposed framework of the IEC 61400-15
standard. The annotations correspond to those in Fig. 6, where each purple dot, green dot, and purple cross represents the mean estimated uncertainty, the mean observed uncertainty, and the maximum of estimated uncertainty from each independent reference, respectively. The uncertainties are expressed as percentages of AEP. The column of numbers on the right denotes the estimated and observed sample sizes in purple and green, respectively, in each subcategory, and such sample size represents all the instances in that subcategory that reported either the mean or the maximum uncertainty values. For clarity, the grey horizontal lines separate data from each subcategory. Table B6 enumerates the production uncertainties.
In the literature, we cannot identify all the uncertainty types listed in the proposed IEC framework; hence, the following AEP uncertainty subcategories in Table A2 are omitted in Fig. 10: wind direction measurement in measurement;
on-site data synthesis in historical wind resource; model inputs and model
appropriateness in horizontal extrapolation; model components and model stress in vertical extrapolation; and environmental loss in plant performance.
Similar to energy losses, other types of AEP uncertainties not in the proposed IEC framework emerge. The magnitude of the uncertainties in Fig. 11 is comparable to the uncertainties in Fig. 10. The power curve measurement uncertainty in Fig. 11, specifically mentioned in the data sources, could be interpreted as the uncertainty from the turbine performance loss.
As in Fig. 10 but for the uncertainty categories outside of the proposed IEC framework, as listed in Table B7.
The energy-production uncertainty from air density and vertical extrapolation depends on the geography of the site. For instance, the elevation differences between sea level and the site altitude, as well as the elevation differences between the mast height and turbine hub height, affect the AEP uncertainty (Nielsen et al., 2010). For simple terrain, the vertical extrapolation uncertainty can be estimated to increase linearly with elevation (Nielsen et al., 2010). A common industry practice is to assign 1 % of energy uncertainty for each 10 m of vertical extrapolation, which could overestimate the uncertainty, except for forested locations (Langreder, 2017).
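The 1 % per 10 m rule of thumb above can be sketched as a one-line calculation; this is a minimal illustration of the industry heuristic, and the function name and interface are our own, not from any standard.

```python
def vertical_extrapolation_uncertainty(mast_height_m: float, hub_height_m: float) -> float:
    """Rule-of-thumb AEP uncertainty (% of AEP) from vertical extrapolation:
    roughly 1 % of energy per 10 m extrapolated between mast and hub height
    (Langreder, 2017). May overestimate except at forested sites."""
    extrapolation_distance_m = abs(hub_height_m - mast_height_m)
    return extrapolation_distance_m / 10.0  # percent of AEP
```

For example, extrapolating from a 60 m mast to a 100 m hub height yields `vertical_extrapolation_uncertainty(60, 100)` = 4.0 % of AEP under this heuristic.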
Wind speed uncertainty
Energy production of a wind turbine scales with the cube of wind speed. Because wind speed, whether measured, derived, or simulated, is a critical input to an energy estimation model, the uncertainty of wind speed plays an important role in the WRA process. We present various groups of wind speed uncertainties from the literature herein (Fig. 12). The bulk of the wind speed uncertainties are roughly 10 % or less of the wind speed. Many studies report estimated uncertainty from wind speed measurement; however, its magnitude and the discrepancy among sources are not as large as those from wind speed modeling or interannual variability (Fig. 12). Note that some of the wind speed categories coincide with the IEC-proposed framework of energy uncertainty, whereas others do not. The absence of a standardized classification of wind speed uncertainties increases the ambiguity of the findings in the literature and poses challenges to interpreting the results in Fig. 12. We also lack sufficient samples of measured wind speed uncertainties to validate the estimates.
Ranges of wind speed uncertainties in different categories. The
annotations correspond to those in Fig. 10, where each dark purple dot, dark green dot, and dark purple cross represents the mean estimated wind speed uncertainty, the mean observed wind speed uncertainty, and the maximum estimated wind speed uncertainty from each independent study, respectively. The uncertainties are expressed as percentages of wind speed. The column of numbers on the right denotes the estimated and observed sample sizes in dark purple and dark green, respectively, in each category; each sample size counts all the instances in that category that reported either the mean or the maximum uncertainty values. For clarity, the grey horizontal lines separate data from each category. Table B8 documents the wind speed uncertainties displayed.
Wind speed uncertainty greatly impacts AEP uncertainty, and the methods of translating wind speed uncertainty into AEP uncertainty differ between organizations. For example, a 1 % increase in wind speed uncertainty can lead to a 1.6 % (AWS Truepower, 2014) to 1.8 % increase in energy-production uncertainty (Holtslag, 2013; Johnson et al., 2008; White, 2008a, b). Local wind regimes can also affect this ratio: for low wind locations, AEP uncertainty can be 3 times the wind speed uncertainty, whereas the ratio drops to 1.5 at high wind sites (Nielsen et al., 2010).
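This linear translation can be sketched as follows; it is an illustration of the sensitivity ratios quoted above, with a function name and default value of our own choosing rather than an industry-standard implementation.

```python
def aep_uncertainty_from_wind_speed(u_ws_pct: float, sensitivity: float = 1.8) -> float:
    """Translate wind speed uncertainty (% of wind speed) into AEP
    uncertainty (% of AEP) via a linear sensitivity ratio.

    A pure cubic power law would imply sensitivity = 3; empirical ratios in
    the literature range from about 1.5 (high wind sites) to 3 (low wind
    sites), with 1.6 to 1.8 commonly reported. The default of 1.8 is an
    illustrative assumption, not a recommendation.
    """
    return sensitivity * u_ws_pct
```

With the default ratio, a 1 % wind speed uncertainty maps to a 1.8 % energy-production uncertainty, consistent with the range cited above.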
Decreasing wind speed uncertainty benefits the wind energy industry. A 0.28 % reduction in wind speed measurement uncertainty could reduce project-production uncertainty by about 0.15 % (Medley and Smith, 2019). Using a computational fluid dynamics model to simulate airflow around meteorological masts can reduce wind speed measurement uncertainty from 2.68 % to 2.23 %, which translates to GBP 1.2 million of equity savings for a 1 GW offshore wind farm in the United Kingdom (Crease, 2019).
Opportunities for improvements
Although the industry is reducing the mean P50 overprediction bias, the considerable uncertainties inherent in the WRA process overshadow this achievement. Different organizations have improved their techniques over time to eliminate the P50 bias (Table 1), and as a whole we celebrate these technological advancements; nevertheless, challenges remain in validating and reducing the AEP losses and uncertainties. Even though the average P50 prediction bias is decreasing and approaching zero, the associated mean P50 uncertainty remains above 6 %, even for studies reported after 2016 (Fig. 3b). For a validation study that involves a collection of wind farms, such an uncertainty bound implies that sizable P50 prediction errors can still emerge for particular wind projects. In other words, statistically, the AEP prediction is becoming more accurate yet remains imprecise. Moreover, from an industry-wide perspective that aggregates different analyses, the variability in the mean P50 bias estimates is notable, which obscures the overall bias-reducing trend (R2 below 0.5 in Fig. 3). Specifically, the magnitude of the 95 % prediction interval, over 10 % of average P50 estimation error (Fig. 3b), suggests a considerable range of possible mean biases in future validation studies. Additionally, the uncertainties remain substantial in specific AEP losses (Fig. 9), in AEP itself (Figs. 10 and 11), and in wind speed (Fig. 12). Therefore, the quantification, validation, and reduction of uncertainties require the attention of the industry collectively.
To reduce the overall AEP uncertainty, the industry should continue to assess the energy impacts of plant performance losses, especially those from wake effect and environmental events. On one hand, wake effect, as part of a grand challenge in wind energy meteorology (Veers et al., 2019), has been estimated as one of the largest energy losses (Fig. 6e). The AEP loss caused by wake effect also varies, estimated between 15 % and 40 % (Fig. 9), and the unpredictability of wakes contributes to the AEP uncertainty on plant
performance (Fig. 10e) and the wind speed uncertainty (Fig. 12). Although the industry has been simulating and measuring energy loss caused by wake effect, its site-specific impact on AEP for the whole wind farm as well as its
time-varying production impact on downwind turbines remains largely uncertain. From a macro point of view, compared to internal wake effect, external wake effect from neighboring wind farms is a bigger known unknown
because of the lack of data and research. On the other hand, environmental losses display a broad range of values, particularly from icing events and turbine degradation (Fig. 6c). In general, the icing problem halts energy production in the short run, and blade degradation undermines turbine performance in the long run. Diagnosing and mitigating such substantial environmental losses would reduce both the loss and the uncertainty of AEP. Overall, the prediction and prevention of environmental events are critical, because production downtime during high electricity demand can lead to consequential financial losses.
Additionally, the industry recognizes the role of remote-sensing instruments
in reducing the uncertainty of energy production and wind speed from extrapolation, such as profiling lidars, scanning lidars, and airborne
drones (Faghani et al., 2008; Holtslag, 2013; Peyre, 2019; Rogers, 2010). The latter can also be used to inspect turbine blades (Shihavuddin et al., 2019) to reduce unexpected blade degradation loss over time. Industry-wide collaborations, such as the International Energy Agency Wind Task 32 and the Consortium For Advancement of Remote Sensing, have been promoting remote-sensing implementation in WRA.
Leaders in the field have been introducing contemporary perspectives and
innovative techniques to improve the WRA process, including time-varying and
correlating losses and uncertainties. Instead of treating energy loss and
uncertainty as a static property, innovators have studied time-varying AEP
losses and uncertainties (Brower et al., 2012), especially when wind plants produce less energy with greater uncertainty in later operational years (Istchenko, 2015). Furthermore, different types of energy-production losses or uncertainties interact and correlate with each other, and dependent data sources can emerge in the WRA process. The resultant compound effect from two correlating sources of uncertainty can change the total uncertainty derived using a linear (Brower, 2011) or root-sum-square approach (Istchenko, 2015). For example, an icing event can block site access and decrease turbine availability and even lead to longer-term maintenance problems (Istchenko, 2015).
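The effect of correlation on the combined total can be sketched with the standard variance-propagation formula; this is a generic illustration under an assumed shared pairwise correlation coefficient, not the specific procedure of any of the cited authors.

```python
import math

def combined_uncertainty(sigmas, rho=0.0):
    """Combine component uncertainties (each in % of AEP) into a total.

    rho = 0 reproduces the common root-sum-square assumption of
    independence; rho = 1 recovers the linear sum (full correlation),
    and intermediate rho inflates the total relative to root-sum-square.
    """
    variance = sum(s ** 2 for s in sigmas)
    # Add pairwise covariance terms for the shared correlation coefficient.
    for i in range(len(sigmas)):
        for j in range(i + 1, len(sigmas)):
            variance += 2.0 * rho * sigmas[i] * sigmas[j]
    return math.sqrt(variance)
```

For two 5 % components, independence gives about 7.1 % total, whereas full correlation gives 10 %, which illustrates how dependent sources (such as icing reducing both availability and performance) can change the derived total.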
More observations and publicly available data are necessary to validate the estimates listed in this article. In this article, the ratios of measured to predicted values are 1 to 1.9, 1 to 2.3, and 1 to 7.3 for energy loss, energy uncertainty, and wind speed uncertainty, respectively. The small number of references on measured uncertainties indicates that we need more evidence to further evaluate our uncertainty estimates. Moreover, challenges exist in interpreting and harmonizing results from disparate reporting of energy-production losses and uncertainties. Documentation aligned with ubiquitous reference frameworks will greatly strengthen the accuracy and repeatability of future literature reviews. Therefore, data and method transparency and standardization will continually improve insight into the WRA process, increase the AEP estimation accuracy, and drive future innovation.
Conclusions
In this study, we compile and present the ranges and the trends of predicted P50 (i.e., median annual energy production) errors, as well as the estimated and observed energy losses, energy uncertainties, and wind speed uncertainties embedded in the wind resource assessment process. We conduct this literature review using over 150 credible sources from conference presentations to peer-reviewed journal articles.
Although the mean P50 bias demonstrates a decreasing trend over time because of continuous methodology adjustments, the notable uncertainty of the mean prediction error reveals the imprecision of annual energy production predictions. The dominant effect of prediction uncertainty over the bias magnitude calls for further improvements in the prediction methodologies. To reduce the mean bias, industry experts have made method adjustments in recent years, such as the application of remote-sensing devices and advances in the modeling of meteorological phenomena.
We present the wind-energy-production losses and uncertainties in this
literature review according to the proposed framework by the IEC 61400-15 working group. Wake effect and
environmental events undermine wind plant performance and constitute the largest loss in energy production, and validating the wake and environmental
loss predictions requires more field measurements and detailed research.
Moreover, the variability of observed total availability loss is larger than its estimates. Meanwhile, the decreasing trends of measured curtailment loss
and observed generic power curve adjustment loss indicate the continuing
industry effort to optimize wind energy production. Additionally, different
categorical energy uncertainties and wind speed uncertainties demonstrate
similar magnitudes, with a majority of the data below 10 %. More observations are needed to better understand and further reduce these uncertainties.
In our findings, we highlight avenues for future progress, including the importance of accurately predicting and validating energy-production uncertainty, the impact of wake effect, and innovative approaches in the
wind resource assessment process. This work also includes a summary of the
data collected and used in this analysis. As the industry evolves with improved data sharing, method transparency, and rigorous research, we will
increasingly be able to maximize energy production and reduce its uncertainty for all project stakeholders.
Consensus energy-production loss framework for wind resource
assessment proposed by the International Electrotechnical Commission (IEC) 61400-15 working group (Filippelli et al., 2018). Note that
this table does not represent the final standards.
Wake effect
- Internal wake effects: Wake effects internal to the wind plant
- External wake effects: Wake effects generated externally to the wind plant
- Future wake effects: Wake effects that will impact future energy projections based on either confirmed or predicted new project development or decommissioning

Availability
- Turbine availability: Including warranted availability, non-contractual availability, restart after grid outage, site access, downtime (or speed) to energy ratio, first-year or plant start-up availability
- Balance-of-plant availability: Availability of substation and collection system, other non-turbine availability, warranted availability, site access, first-year or plant start-up availability
- Grid availability: Grid being outside the grid connection agreement operational parameters, actual grid downtime, delays in restart after grid outages

Electrical
- Electrical efficiency: Electrical losses between the low- or medium-voltage side of the wind turbine transformer and the energy measurement point
- Facility parasitic consumption: Turbine extreme weather packages, other turbine and/or plant parasitic electrical losses (while operating or not operating)

Turbine performance
- Suboptimal performance: Performance deviations from the optimal wind plant performance caused by software, instrumentation, and control setting issues
- Generic power curve adjustment: Expected deviation between advertised power curve and actual power performance in standard conditions ("inner range")
- Site-specific power curve adjustment: Accommodating for inclined flow, turbulence intensity, density, shear, and other site- or project-specific adjustments ("outer range")
- High wind hysteresis: Energy lost in the hysteresis loop between high-wind-speed cut out and recut in

Environmental
- Icing: Performance degradation and shutdown caused by icing
- Degradation: Blade fouling, efficiency losses, and other environmentally driven performance degradation
- Environmental loss: High- or low-temperature shutdown or derate, lightning, hail, and other environmental shutdowns
- Exposure: Tree growth or logging, other building development

Curtailments (or operational strategies)
- Load curtailment: Speed and/or direction curtailments to mitigate loads
- Grid curtailment: Power purchase agreement or offtaker curtailments, grid limitations
- Environmental or permit curtailment: Birds, bats, marine mammals, flicker, noise (when not captured in the power curve)
- Operational strategies: Any periodic uprating, downrating, optimization, or shutdown not captured in the power curve or availability carve-outs
Consensus energy-production uncertainty framework for wind resource
assessment proposed by the IEC 61400-15 working group (Filippelli et al.,
2018). Note that this table does not represent the final standards.
Historical wind resource
- Long-term period: What is the statistical representativeness of the chosen historical and/or site data period? In other words, the interannual variability (coefficient of variation) of the historical reference data period in years
- Reference data: How accurate or reliable is the chosen reference data source? In other words, historical data consistency (e.g., are there possible underlying trends in the data?)
- Long-term adjustment: What is the uncertainty associated with the prediction process? Statistical or empirical uncertainty in establishing a correlation or carrying out a prediction, which may be conditioned upon the correlation method and the span or quantity of the concurrent data period
- Wind speed and direction distribution: Mean wind speed aside, how representative is the measured or predicted distribution and wind rose or energy rose shape of the long term?
- On-site data synthesis: Uncertainty associated with gap-filling missing data periods. Usually done using directional correlations or the measure–correlate–predict process, and hence the long-term and reference data categories may apply

Project evaluation period variability
- Modeled operational period: The statistical uncertainty associated with how closely the wind resource over the modeled operational period (i.e., 1 or 10 years) may match the long-term site average
- Climate change: When an impact of climate change can be assessed, then this may be considered as an uncertainty
- Plant performance: The statistical uncertainty associated with how closely the plant performance over the modeled operational period (i.e., 1 or 10 years) may match the long-term site average

Measurement
- Wind speed measurement: Including effects of wind speed sensor characteristics (cup or sonic), wind speed sensor mounting or deployment (cup or sonic), wind speed sensor data handling and processing characteristics (e.g., tower shadow, icing, and degradation), system motion, consistency and exposure, data acquisition, and data handling. Additionally, the reduction in uncertainty caused by sensor combination is considered
- Data integrity and documentation: Documentation, verification, and traceability of the data
- Wind direction measurement: Sensor type or quality, operational characteristics, mounting effects, alignment, acquisition, long-term representativeness
- Further atmospheric parameters: Air temperature, pressure, relative humidity, and other atmospheric parameters

Vertical extrapolation
- Model inputs: Terrain surface characterization, wind data measurement heights, wind statistics or shear, measurement uncertainty
- Model components: Representativeness per height or terrain, profile fit
- Model stress: Large extrapolation distance, complex terrain (measurement height relative to terrain complexity)

Horizontal extrapolation
- Model inputs: Fidelity and appropriateness, given sensitivity of the model to terrain data, roughness, forestry information, atmospheric conditions
- Model stress: Representativeness of initiation points relative to turbine locations in terms of complicating factors (e.g., forestry, stability, steep slopes, distance, elevation, veer); the intensity of and sensitivity to complicating factors
- Model appropriateness: Physical scientific plausibility of the model to capture complicating factors; validation of model implementation: published validation of the specific implementation and relevance to complicating factors present on site; on-site model verification: site to site (untuned, blind); consider the quality of any shear verification

Plant performance
- Wake effect, availability, electrical, turbine performance, environmental, curtailments or operational strategies: Refer to Table A1
For the P50 prediction error, Figs. 3 and 4 use the data from Table B1 and Table B2, respectively. For the various categories and subcategories of losses, Figs. 5, 6, 8, and C1 portray the values in Table B3. Figure 7
illustrates the losses outside of the IEC-proposed framework listed in
Table B4. Figure 9 summarizes the uncertainty of production loss percentages in Table B5. Figures 10 and 11 represent the AEP uncertainty data included in
Tables B6 and B7, respectively. Figure 12 displays the wind speed uncertainty data in Table B8.
List of P50 biases in the literature, which is necessary to
generate Fig. 3. The “Wind farm” column denotes the number of wind farms reported in the reference, and the “Wind farm year” column indicates the total number of operation years among the wind farms in that study. The “Bias (%)” column represents the average P50 bias, where a negative number indicates an overestimation of actual energy production. All the values in the “Uncertainty (%)” column illustrate 1 standard deviation from the mean.
YearWindWindBiasUncertaintyNotesSourcefarmfarm(%)(%)year200212-16Mönnich et al. (2016)200310-11Mönnich et al. (2016)200419-12Mönnich et al. (2016)200537-8Mönnich et al. (2016)2006-13Johnson et al. (2008)200621-10Mönnich et al. (2016)200723-5Mönnich et al. (2016)200859243-11Johnson et al. (2008), Jones (2008)200841113-4Johnson et al. (2008)200856112-10White (2009)20083662-2.1Johnson (2012)2008-10Industry averageWhite (2009)200817-10Mönnich et al. (2016)2009255-1Horn (2009)2009-9Hendrickson (2009)200943-3Hendrickson (2009)200910.56.4Comparison of four analystsDerrick (2009)20091145-2.27.3White (2009)200918-3Mönnich et al. (2016)2010-18.1From 1806 wind turbinesNielsen et al. (2010)201011-10Mönnich et al. (2016)201112.4Comparison of 15 analystsHendrickson (2011)201189-6Industry average from 2000 to 2011Drunsic (2012)2011-2Drunsic (2012)201118-7Mönnich et al. (2016)2011-6.70.8Lunacek et al. (2018)2012-5Industry average from 2005 to 2011Drunsic (2012)2012-1Drunsic (2012)2012-1Brower et al. (2012)20121253820Johnson (2012)2012-2.4Bernadett et al. (2012)201211-7Mönnich et al. (2016)20126-4.9Pullinger et al. (2019)201314-1Mönnich et al. (2016)201424106-18.8Brower (2014)201431101-1.4Istchenko (2014)2014-0.6Geer (2014)20149-15Redouane (2014)20144-2Mönnich et al. (2016)2015-1.9Istchenko (2015)20151004Sieg (2015)20151-43Comparison of 20 analystsMortensen et al. (2015a, b)201511Mönnich et al. (2016)20152591-8Cox (2015)201530127-2.2Stoelinga and Hendrickson (2015)20151858-1.6Hendrickson (2019)201523-4.77.7Hatlee (2015)2016301270.18.8Baughman (2016)2017140-2Projects from 2011 to 2016Elkinton (2017), Hale (2017)201761-1.67.6Most projects from 2008 to 2012Brower (2017), Hale (2017)2017-2.5Hale (2017)2017301270.78.8Perry (2017)
Continued.
YearWindWindBiasUncertaintyNotesSourcefarmfarm(%)(%)year201856294-5.51.3Lunacek et al. (2018)2018500Hendrickson (2019)2018-1.57.6Hendrickson (2019)20186-1.4Pullinger et al. (2019)201931212-1.24.7Crescenti et al. (2019)201930144011.37Hendrickson (2019)201930111-0.14.5Hendrickson (2019)201907.3Hendrickson (2019)201987570-3.1Papadopoulos (2019)201925146-5Papadopoulos (2019)20191159-0.4Papadopoulos (2019)20191124-3.9Papadopoulos (2019)
List of P50 bias groups for Fig. 4, expanding from Table B1. Different groups (the “Group” column) are represented by different line colors in Fig. 4.
GroupYearWindWindBiasUncertaintyNotesSourcefarmfarm(%)(%)year12006-13Johnson et al. (2008), Jones (2008)1200859243-11Johnson et al. (2008), Jones (2008)2200841113-10Johnson et al. (2008)2200841113-4Adjust for windiness and availabilityJohnson et al. (2008)2200943-3Hendrickson (2009)32008-10Industry averageWhite (2009)32011476-9Industry averageDrunsic (2012)3201189-6Industry average from 2000 to 2011Drunsic (2012)32012-5Industry average from 2005 to 2011Drunsic (2012)42009-10Hendrickson (2009)42009-9Exclude Texas projectsHendrickson (2009)520091145-2.27.3White (2009)520091145-3.57Accounting for windinessWhite (2009)62010-8Projects from 2000 to 2010Ostridge (2017)6201750-3Projects from 2011 to 2016Elkinton (2017), Hale (2017)62017140-2Adjusted for curtailment and windiness, and so on.Elkinton (2017), Hale (2017)62018500Hendrickson (2019)72010294-9.9Projects before 2011Lunacek et al. (2018)7201056-9.2Projects before 2011Lunacek et al. (2018)72010-6.70.8Projects before 2011, long-term correction, R2 filteredLunacek et al. (2018)82011-2Projects from 2000 to 2011Drunsic (2012)82012-1Projects from 2005 to 2011Drunsic (2012)92012125382-9Johnson (2012)920121253820Johnson (2012)10201224106-3.61.4Bernadett et al. (2012)102012-2.4Bernadett et al. (2012)11201431101-2.81 yearIstchenko (2014)11201431101-1.410 yearsIstchenko (2014)12201424106-1.17.5Brower (2014)12201424106-18.8Correct for windinessBrower (2014)1320152591-8Cox (2015)1320152591-9Correct for windinessCox (2015)14201530127-2.2Adjust for windiness and availabilityStoelinga and Hendrickson (2015)142016301270.18.8Baughman (2016)1520151858-1.64.4Hendrickson (2019)15201930111-0.14.5Hendrickson (2019)16201865-6.6Projects after 2011Lunacek et al. (2018)16201823-6.4Projects after 2011Lunacek et al. (2018)162018-5.51.28Long-term correction, R2 filteredLunacek et al. (2018)172018-1.57.6Hendrickson (2019)17201907.3Hendrickson (2019)
List of energy losses, corresponding to Figs. 6 and 8. The “e” and “o” in the “Est/obs” column represent estimated and observed values,
respectively. The energy loss categories and subcategories align with those
in Table A1. "Avg (%)", "Min (%)", and "Max (%)" indicate the average, minimum, and maximum energy loss percentages, respectively. The same column-name abbreviations apply to the following tables in Appendix B.
YearEst/CategorySubcategoryAvgMinMaxNotesSourceobs(%)(%)(%)2010eAvailabilityBalance of12Clive (2010)plant2013eAvailabilityBalance of1Typical northwestMortensen (2013)plantEuropean onshore2014eAvailabilityBalance of0.20.20.4Typical NorthAWS TruepowerplantAmerican onshore,(2014)collection, andsubstation2016eAvailabilityBalance of0.5SubstationClifton et al. (2016)plant2017eAvailabilityBalance of0.30.5Onshore: 0.5;Papadopoulos (2019)plantOffshore: 0.32011oAvailabilityBalance of0.2Johnson (2011)plant2010eAvailabilityGrid213WindPro 2.7Nielsen et al. (2010)2013eAvailabilityGrid1Typical northwestMortensen (2013)European onshore2014eAvailabilityGrid0.30.30.6Typical NorthAWS TruepowerAmerican onshore,(2014)utility grid2016eAvailabilityGrid1TransmissionClifton et al. (2016)2019eAvailabilityGrid13.3Hill et al. (2019)availability2008oAvailabilityGrid0.72.5Spengemann andBorget (2008)2008eAvailabilityTotal3Outside NorthGraves et al. (2008)availabilityAmerica2008eAvailabilityTotal35Include first-yearJohnson et al.availabilityoperation, also(2008), White (2008a)stated in Table B42009eAvailabilityTotal323Randall (2009)availability2009eAvailabilityTotal35United States:Horn (2009)availabilitysouthern states: 3;northern states: 52011eAvailabilityTotal5AnalystHendrickson (2011)availabilitycomparison2012eAvailabilityTotal3Drunsic (2012)availability2012eAvailabilityTotal6210Brower (2012)availability
Continued.
YearEst/CategorySubcategoryAvgMinMaxNotesSourceobs(%)(%)(%)2013eAvailabilityTotal3.2Onshore, analystMortensen andavailabilitycomparisonEjsing Jørgensen(2013)2014eAvailabilityTotal6.2Typical NorthAWS TruepoweravailabilityAmerican onshore(2014)2016eAvailabilityTotal25For plants built inClifton et al. (2016)availability2010 to 20152016eAvailabilityTotal4.2Beaucage et al.availability(2016)2016eAvailabilityTotal24Bernadett et al.availability(2016)2018eAvailabilityTotal2OnshoreStehly et al. (2018)availability2007oAvailabilityTotal7.4Johnson (2011)availability2008oAvailabilityTotal4.5North AmericaGraves et al. (2008)availability2008oAvailabilityTotal5Johnson et al.availability(2008), White (2008a)2008oAvailabilityTotal7Johnson et al.availability(2008), Jones (2008)2008oAvailabilityTotal6.7Johnson (2011)availability2008oAvailabilityTotal6Lackner et al. (2008)availability2009oAvailabilityTotal availability56Hendrickson (2009)2009oAvailabilityTotal availability6.5Randall (2009)2009oAvailabilityTotal8.2Most available inCushman (2009)availabilitysummer and fall,least in winter2009oAvailabilityTotal6.9Johnson (2011)availability2010oAvailabilityTotal3.5Johnson (2011)availability2010oAvailabilityTotal1.1111WindPro 2.7Nielsen et al. (2010)availability2011oAvailabilityTotal11Conroy et al. (2011)availability2011oAvailabilityTotal2.6Johnson (2011)availability2012oAvailabilityTotal6Drunsic (2012)availability2012oAvailabilityTotal6.4Higher availabilityWinslow (2012)availabilityloss for higherwind speeds
Continued.
YearEst/CategorySubcategoryAvgMinMaxNotesSourceobs(%)(%)(%)2015oAvailabilityTotal5Operational issuesCox (2015)availability(e.g., cables,connection,turbine)2016oAvailabilityTotal4.5Beaucage et al.availability(2016)2016oAvailabilityTotal3.2Bernadett et al.availability(2016)2019oAvailabilityTotal4Pedersen andavailabilityLangreder (2019)2010eAvailabilityTurbine25Clive (2010)2010eAvailabilityTurbine25WindPro 2.7Nielsen et al. (2010)2013eAvailabilityTurbine3Typical northwestMortensen (2013)European onshore2014eAvailabilityTurbine5.9310.1Typical NorthAWS TruepowerAmerican onshore,(2014)combined fromcontractual turbine,non-contractualturbine, correlation,restart, site access2011oAvailabilityTurbine2.3Johnson (2011)2019oAvailabilityTurbine1.67Combine scheduledPedersen andand unscheduledLangreder (2019)maintenance2014eCurtailmentGrid03.5Typical NorthAWS TruepowerAmerican onshore,(2014)including powerpurchaseagreement2016eCurtailmentGrid1Clifton et al. (2016)2019eCurtailmentGrid3.8Ireland estimate,Papadopoulosbased on(2019)operational data2016oCurtailmentGrid0.51Interconnection capOstridge andRodney (2016)2014eCurtailmentLoad03.5Typical NorthAWS TruepowerAmerican onshore,(2014)directional2019oCurtailmentLoad1.02Load shutdownPedersen andLangreder (2019)2014eCurtailmentPermit03.5Typical NorthAWS TruepowerAmerican onshore(2014)2016eCurtailmentPermit1Clifton et al. (2016)2018eCurtailmentPermit0.050.2Shadow flickerMibus (2018)
Continued.
YearEst/CategorySubcategoryAvgMinMaxNotesSourceobs(%)(%)(%)2016oCurtailmentPermit0.42.4BatOstridge andRodney (2016)2019oCurtailmentPermit0.670.71Bat and shadowPedersen andflickerLangreder (2019)2011eCurtailmentTotal0AnalystHendrickson (2011)curtailmentcomparison2012eCurtailmentTotal005Brower (2012)curtailment2014eCurtailmentTotal0Typical NorthAWS TruepowercurtailmentAmerican onshore(2014)2016eCurtailmentTotal14Clifton et al. (2016)curtailment2011oCurtailmentTotal4Johnson (2011)curtailment2012oCurtailmentTotal2.97Wiser et al. (2019)curtailment2013oCurtailmentTotal2.86Wiser et al. (2019)curtailment2014oCurtailmentTotal14VariesBird et al. (2014)curtailmentgeographically2014oCurtailmentTotal2.31Wiser et al. (2019)curtailment2015oCurtailmentTotal2.15Wiser et al. (2019)curtailment2016oCurtailmentTotal2.1Wiser et al. (2019)curtailment2017oCurtailmentTotal2.54Wiser et al. (2019)curtailment2018oCurtailmentTotal2.18Wiser et al. (2019)curtailment2014eElectricalElectrical213Typical NorthAWS TruepowerefficiencyAmerican onshore(2014)2016eElectricalElectrical12Collector systemClifton et al. (2016)efficiency2014eElectricalFacility0.100.1Typical NorthAWS TruepowerparasiticAmerican onshore,(2014)consumptionweather package2010eElectricalTotal23Clive (2010)electrical2011eElectricalTotal3AnalystHendrickson (2011)electricalcomparison2012eElectricalTotal2.123Brower (2012)electrical
Continued.
Year | Est/obs | Category | Subcategory | Avg (%) | Min (%) | Max (%) | Notes | Source
2013 | e | Electrical | Total electrical | 1.2 | – | – | Typical northwest European onshore | Mortensen (2013)
2013 | e | Electrical | Total electrical | – | 1 | 2 | Typical northwest European onshore | Mortensen (2013)
2014 | e | Electrical | Total electrical | – | 0.7 | 2 | | Colmenar-Santos et al. (2014)
2014 | e | Electrical | Total electrical | 2.1 | – | – | Typical North American onshore | AWS Truepower (2014)
2016 | e | Electrical | Total electrical | – | 2 | 3.5 | | Clifton et al. (2016)
2008 | o | Electrical | Total electrical | 3 | – | – | | Spengemann and Borget (2008)
2006 | e | Environmental | Degradation | – | 1 | 3 | | Spruce and Turner (2006)
2009 | e | Environmental | Degradation | 0.2 | 0.1 | 0.4 | 10 years | Randall (2009)
2009 | e | Environmental | Degradation | 1.2 | 0.5 | 1.9 | 20 years | Randall (2009)
2010 | e | Environmental | Degradation | – | 5 | 10 | | Standish et al. (2010)
2011 | e | Environmental | Degradation | 0.3 | – | – | | Bernadett et al. (2012)
2012 | e | Environmental | Degradation | 0.6 | – | – | | Bernadett et al. (2012)
2014 | e | Environmental | Degradation | – | 5 | 25 | Wind tunnel study | Sareen et al. (2014)
2014 | e | Environmental | Degradation | 1 | 0.6 | 1.3 | Typical North American onshore | AWS Truepower (2014)
2014 | e | Environmental | Degradation | – | 5 | 20 | Extreme cases | Redouane (2014)
2015 | e | Environmental | Degradation | 5 | – | – | | Langel et al. (2015)
2016 | e | Environmental | Degradation | – | 1 | 2 | Industry standard; soiling and erosion | Clifton et al. (2016)
2016 | e | Environmental | Degradation | 5 | – | – | | Maniaci et al. (2016)
2017 | e | Environmental | Degradation | – | 0.4 | 2.3 | | Ehrmann et al. (2017)
2017 | e | Environmental | Degradation | 8 | – | – | | Schramm et al. (2017)
2017 | e | Environmental | Degradation | – | 4.9 | 6.8 | | Wilcox et al. (2017)
2019 | e | Environmental | Degradation | 3.6 | – | – | Normal operation | Hasager et al. (2019)
2019 | e | Environmental | Degradation | 2.6 | – | – | Erosion-safe mode operation | Hasager et al. (2019)
2014 | o | Environmental | Degradation | – | 1.4 | 1.8 | United Kingdom | Staffell and Green (2014)
2016 | o | Environmental | Degradation | – | 1.5 | 2 | Before blade repair | Murphy (2016)
2017 | o | Environmental | Degradation | 0.3 | – | – | Sweden | Olauson et al. (2017)
2018 | o | Environmental | Degradation | 0.44 | – | – | | Wiser et al. (2019)
2019 | o | Environmental | Degradation | 0.6 | – | – | Germany | Germer and Kleidon (2019)
2019 | o | Environmental | Degradation | 9.5 | – | – | Leading-edge erosion | Latoufis et al. (2019)
2020 | o | Environmental | Degradation | – | 0.17 | 1.23 | United States | Hamilton et al. (2020)
2014 | e | Environmental | Environmental | 0.6 | 0 | 3.9 | Typical North American onshore, combining temperature shutdown and lightning | AWS Truepower (2014)
2016 | e | Environmental | Environmental | 1 | – | – | Temperature shutdown | Clifton et al. (2016)
2019 | o | Environmental | Environmental | 0.35 | – | – | Temperature shutdown | Pedersen and Langreder (2019)
2016 | e | Environmental | Exposure | – | 0 | 3 | Exposure over time | Clifton et al. (2016)
2014 | e | Environmental | Icing | 1 | 0 | 4.5 | Typical North American onshore | AWS Truepower (2014)
2016 | e | Environmental | Icing | – | 1 | 5 | | Clifton et al. (2016)
2016 | e | Environmental | Icing | 5.6 | – | – | | Beaucage et al. (2016)
2019 | e | Environmental | Icing | 30 | – | – | | Abascal et al. (2019)
2008 | o | Environmental | Icing | – | 2 | 6 | Average of two wind farms for 4 years | Gillenwater et al. (2008)
2010 | o | Environmental | Icing | – | 2 | 4 | Four winters, 10 % of the year | Rindeskär (2010)
2015 | o | Environmental | Icing | 10 | – | – | Seven wind farms, 111 turbines, 272 MW in Sweden | Byrkjedal et al. (2015)
2016 | o | Environmental | Icing | – | 5 | 15 | Three consultants underestimated this by a factor of 1.5 to 4 | Trudel (2016)
2016 | o | Environmental | Icing | 4.9 | – | – | | Beaucage et al. (2016)
2019 | o | Environmental | Icing | – | 33 | 35 | | Abascal et al. (2019)
2011 | e | Environmental | Total environmental | 2 | – | – | Analyst comparison | Hendrickson (2011)
2012 | e | Environmental | Total environmental | 2.6 | 1 | 6 | | Brower (2012)
2013 | e | Environmental | Total environmental | – | 1 | 2 | Typical, used in Wind Atlas Analysis and Application Program (WAsP); includes blade degradation, icing, temperature shutdown | Mortensen (2013)
2013 | e | Environmental | Total environmental | – | 1 | 2 | Typical northwest European onshore; includes blade degradation and icing | Mortensen (2013)
2014 | e | Environmental | Total environmental | 2.7 | – | – | Typical North American onshore | AWS Truepower (2014)
2016 | e | Environmental | Total environmental | – | 1 | 7 | | Clifton et al. (2016)
2011 | o | Environmental | Total environmental | 0.4 | – | – | | Johnson (2011)
2010 | e | Total | Total | – | 6 | 13 | | Clive (2010)
2011 | e | Total | Total | 18 | – | – | Analyst comparison | Hendrickson (2011)
2012 | e | Total | Total | 18.5 | 7.8 | 37 | | Brower (2012)
2012 | e | Total | Total | 14.8 | – | – | Analyst comparison | Mortensen et al. (2012)
2013 | e | Total | Total | 22.5 | – | – | Offshore, analyst comparison | Mortensen and Ejsing Jørgensen (2013)
2013 | e | Total | Total | 17.4 | – | – | Onshore, analyst comparison | Mortensen and Ejsing Jørgensen (2013)
2014 | e | Total | Total | 19.7 | 8.5 | 32.2 | Typical North American onshore | AWS Truepower (2014)
2018 | e | Total | Total | 15 | – | – | Onshore | Stehly et al. (2018)
2008 | o | Total | Total | 25 | – | – | | Johnson et al. (2008)
2008 | e | Turbine performance | Generic power curve adjustment | 1 | – | – | | Johnson et al. (2008)
2009 | e | Turbine performance | Generic power curve adjustment | 0.3 | – | – | Turbulence-intensity-dependent power curves | AWS Truepower (2009)
2012 | e | Turbine performance | Generic power curve adjustment | 2.4 | 1 | 4 | | Brower et al. (2012)
2014 | e | Turbine performance | Generic power curve adjustment | 2.4 | 0 | 2.4 | Typical North American onshore | AWS Truepower (2014)
2016 | e | Turbine performance | Generic power curve adjustment | 2.4 | – | – | | Bernadett et al. (2016)
2019 | e | Turbine performance | Generic power curve adjustment | 1 | – | – | | Lee (2019)
2008 | o | Turbine performance | Generic power curve adjustment | – | 2 | 4 | | Johnson et al. (2008), Jones (2008)
2012 | o | Turbine performance | Generic power curve adjustment | – | 2.2 | 3.2 | | Drees and Weiss (2012)
2012 | o | Turbine performance | Generic power curve adjustment | 2.5 | – | – | | Johnson (2012)
2013 | o | Turbine performance | Generic power curve adjustment | 1.8 | – | – | Without yaw error correction | Osler (2013)
2014 | o | Turbine performance | Generic power curve adjustment | 2 | – | – | | Staffell and Green (2014)
2014 | o | Turbine performance | Generic power curve adjustment | 1.6 | 1 | 3 | | Ostridge (2014)
2015 | o | Turbine performance | Generic power curve adjustment | 2 | 0 | 4 | | Geer (2015)
2015 | o | Turbine performance | Generic power curve adjustment | 1.5 | – | – | | Ostridge (2015)
2015 | o | Turbine performance | Generic power curve adjustment | 1.1 | – | – | | Kassebaum (2015)
2018 | o | Turbine performance | Generic power curve adjustment | 0.2 | – | – | | Pram (2018)
2010 | e | Turbine performance | High wind hysteresis | 0.3 | – | – | WindPro 2.7 | Nielsen et al. (2010)
2014 | e | Turbine performance | High wind hysteresis | 0.6 | 0 | 3 | Typical North American onshore | AWS Truepower (2014)
2009 | e | Turbine performance | Site-specific power curve adjustment | 0.6 | – | – | Adjust for tower turbulence intensity to correct NRG Systems Max 40 anemometer overspeeding | AWS Truepower (2009)
2014 | e | Turbine performance | Site-specific power curve adjustment | 0 | 0 | 1 | Typical North American onshore, including inclined flow | AWS Truepower (2014)
2016 | e | Turbine performance | Site-specific power curve adjustment | 0.5 | – | – | | Papadopoulos (2019)
2014 | o | Turbine performance | Site-specific power curve adjustment | – | 2 | 5 | | Staffell and Green (2014)
2008 | e | Turbine performance | Suboptimal performance | 1 | – | – | | Johnson et al. (2008), White (2008a)
2009 | e | Turbine performance | Suboptimal performance | – | 1 | 2 | | White (2009)
2009 | e | Turbine performance | Suboptimal performance | 1 | – | – | | AWS Truepower (2009)
2013 | e | Turbine performance | Suboptimal performance | 0.5 | – | – | | Papadopoulos (2019)
2014 | e | Turbine performance | Suboptimal performance | 1 | 0 | 1 | Typical North American onshore | AWS Truepower (2014)
2019 | e | Turbine performance | Suboptimal performance | – | 1.1 | 2.2 | 10∘ of yaw error | Liew et al. (2019)
2019 | e | Turbine performance | Suboptimal performance | 3 | – | – | Yaw misalignment | Slinger et al. (2019b)
2012 | o | Turbine performance | Suboptimal performance | – | 0 | 3.6 | | Johnson (2012)
2019 | o | Turbine performance | Suboptimal performance | – | 0.4 | 1 | | Pedersen and Langreder (2019)
2019 | o | Turbine performance | Suboptimal performance | – | 0.2 | 1 | Yaw | Pedersen and Langreder (2019)
2010 | e | Turbine performance | Total turbine performance | – | 1 | 3 | | Clive (2010)
2010 | e | Turbine performance | Total turbine performance | – | 10 | 19 | | Clive (2010)
2011 | e | Turbine performance | Total turbine performance | 2 | – | – | Analyst comparison | Hendrickson (2011)
2012 | e | Turbine performance | Total turbine performance | 2.5 | 0 | 5 | | Brower (2012)
2013 | e | Turbine performance | Total turbine performance | – | 1 | 2 | Typical northwest European onshore | Mortensen (2013)
2010 | e | Wake effect | Total wake effect | – | 5 | 15 | WindPro 2.7 | Nielsen et al. (2010)
2010 | e | Wake effect | Total wake effect | – | 1 | 1.5 | Accounting for deep-array loss and turbulence intensity | Nielsen et al. (2010)
2011 | e | Wake effect | Total wake effect | – | 1 | 3 | | Comstock (2011)
2011 | e | Wake effect | Total wake effect | 8 | 6 | 10 | Analyst comparison | Hendrickson (2011)
2012 | e | Wake effect | Total wake effect | 6.7 | 3 | 15 | | Brower (2012)
2012 | e | Wake effect | Total wake effect | 6.1 | 4.5 | 8.1 | Analyst comparison | Mortensen et al. (2012)
2013 | e | Wake effect | Total wake effect | 14 | 6.9 | 37 | Offshore, analyst comparison | Mortensen and Ejsing Jørgensen (2013)
2013 | e | Wake effect | Total wake effect | 10 | 3.9 | 17 | Onshore, analyst comparison | Mortensen and Ejsing Jørgensen (2013)
2014 | e | Wake effect | Total wake effect | 6.4 | 1.1 | 18.1 | Typical North American onshore | AWS Truepower (2014)
2015 | e | Wake effect | Total wake effect | – | 6.1 | 14.3 | Onshore analyst comparison | Mortensen et al. (2015b)
2016 | e | Wake effect | Total wake effect | – | 0 | 10 | Onshore analyst comparison | Clifton et al. (2016)
2018 | e | Wake effect | Total wake effect | – | 4.5 | 7.7 | | Walls (2018)
2019 | e | Wake effect | Total wake effect | – | 1 | 5 | | Slinger et al. (2019a)
2019 | e | Wake effect | Total wake effect | – | 3 | 14 | | Stoelinga (2019)
2010 | o | Wake effect | Total wake effect | 13 | – | – | By the fifth row | Wolfe (2010)
2014 | o | Wake effect | Total wake effect | – | 5 | 15 | Onshore, small (20-turbine) wind farms | Staffell and Green (2014)
2016 | o | Wake effect | Total wake effect | – | 8.4 | 15.3 | Up to fourth row downwind | Kline (2016)
2019 | o | Wake effect | Total wake effect | – | 4 | 16 | | Stoelinga (2019)
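When categorical losses such as the ones tabulated above are rolled into a net energy estimate, standard practice is to combine them multiplicatively rather than additively, because each loss applies to the energy remaining after the others. A minimal sketch of that combination; the loss values below are illustrative placeholders, not recommendations drawn from any single source in the table:

```python
# Losses are combined multiplicatively: total_loss = 1 - prod(1 - L_i).
# Illustrative (hypothetical) category values, expressed as fractions of AEP.
losses = {
    "electrical": 0.021,
    "environmental": 0.027,
    "turbine_performance": 0.025,
    "wake_effect": 0.064,
}

retained = 1.0
for loss in losses.values():
    retained *= 1.0 - loss  # each loss acts on the energy that remains

total_loss = 1.0 - retained
print(f"Total loss: {total_loss:.1%}")  # 13.1%, slightly below the 13.7% sum
```

The multiplicative total is always slightly smaller than the simple sum of the categories, and the gap grows as the individual losses grow.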
List of other categorical losses outside the IEC-proposed framework
(Table A1), which are used to generate Fig. 7.
Year | Est/obs | Category | Subcategory | Avg (%) | Min (%) | Max (%) | Notes | Source
2008 | e | Availability | First few years of operation | – | 3 | 5 | Includes first-year operation; also stated in Table B3 | Johnson et al. (2008), White (2008a)
2014 | e | Availability | First few years of operation | 4 | 2 | 6 | Typical North American onshore, first year | AWS Truepower (2014)
2010 | o | Availability | First few years of operation | – | 4 | 5 | First year of operation | Johnson (2011)
2011 | o | Availability | First few years of operation | – | 2 | 3 | First year of operation | Johnson (2011)
2019 | o | Availability | First few years of operation | 2.2 | – | – | First 2 years of operation | Pullinger et al. (2019)
2018 | e | Turbine performance | Blockage | 1 | – | – | | Bleeg (2018)
2019 | e | Turbine performance | Blockage | – | 0.3 | 1.5 | | Spalding (2019)
2019 | e | Turbine performance | Blockage | 1.75 | – | – | | Robinson (2019)
2019 | e | Turbine performance | Blockage | 1.9 | 0 | 6 | | Lee (2019)
2019 | e | Turbine performance | Blockage | 2 | 1 | 5 | | Papadopoulos (2019)
List of uncertainties of energy losses, as projected in Fig. 9. Note
that each value in this table represents a percentage of the energy loss percentage, i.e., the relative uncertainty of the loss itself.
Year | Est/obs | Category | Avg (%) | Min (%) | Max (%) | Notes | Source
2014 | o | Interannual variability of loss | 3.3 | – | – | | Istchenko (2014)
2014 | o | Intermonthly variability of loss | – | 10 | 14 | | Istchenko (2014)
2012 | e | Nonwake loss | 32 | – | – | Analyst comparison | Mortensen et al. (2012)
2013 | e | Nonwake loss | 7.8 | – | – | Offshore, analyst comparison | Mortensen and Ejsing Jørgensen (2013)
2013 | e | Nonwake loss | 34 | – | – | Onshore, analyst comparison | Mortensen and Ejsing Jørgensen (2013)
2012 | e | Wake loss | 13 | – | – | Analyst comparison | Mortensen et al. (2012)
2013 | e | Wake loss | – | 10 | 20 | Caused by different models and terrains | Brower and Robinson (2013)
2013 | e | Wake loss | – | 20 | 30 | In WindFarmer | Elkinton (2013)
2013 | e | Wake loss | 25 | – | – | | McCaa (2013)
2013 | e | Wake loss | – | 15 | 20 | | Kline (2013)
2013 | e | Wake loss | 30 | – | – | | Halberg and Breakey (2013)
2013 | e | Wake loss | 37 | – | – | Offshore, analyst comparison | Mortensen and Ejsing Jørgensen (2013)
2013 | e | Wake loss | 18 | – | – | Onshore, analyst comparison | Mortensen and Ejsing Jørgensen (2013)
2014 | e | Wake loss | 20 | – | – | | AWS Truepower (2014)
2015 | e | Wake loss | – | 13 | 22 | | Mortensen et al. (2015b)
2016 | e | Wake loss | – | 13 | 35 | | Clifton et al. (2016)
2019 | e | Wake loss | 18 | – | – | | Stoelinga (2019)
2009 | o | Wake loss | 80 | – | – | By second row of an offshore wind farm | Dahlberg (2009)
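Because the table above expresses uncertainty relative to the loss itself, comparing these values with AEP-level uncertainties requires scaling by the loss magnitude. A minimal sketch with illustrative numbers (a hypothetical 10 % wake loss carrying a 20 % relative uncertainty, not values endorsed by any source above):

```python
# Convert a relative loss uncertainty (fraction of the loss) into an
# absolute uncertainty in AEP terms (fraction of gross AEP).
wake_loss = 0.10              # assumed wake loss, fraction of gross AEP
relative_uncertainty = 0.20   # assumed uncertainty, fraction of the loss

aep_uncertainty = wake_loss * relative_uncertainty
print(f"Wake-loss uncertainty: {aep_uncertainty:.1%} of gross AEP")  # 2.0%
```

This is why even an 80 % relative wake-loss uncertainty, as Dahlberg (2009) observed behind the second row, translates into a much smaller absolute AEP uncertainty at the plant level.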
List of energy uncertainties, according to the categories and
subcategories in Table A2. These values correspond to Fig. 10.
Year | Est/obs | Category | Subcategory | Avg (%) | Min (%) | Max (%) | Notes | Source
2004 | e | Historical wind resource | Long-term adjustment | 5 | – | – | WindPro 2.4; methods and measure–correlate–predict | EMD International A/S (2004)
2008 | e | Historical wind resource | Long-term adjustment | – | 5 | 10 | Measure–correlate–predict process | Anderson (2008)
2010 | e | Historical wind resource | Long-term adjustment | – | 3 | 10 | WindPro 2.7; long-term correction | Nielsen et al. (2010)
2013 | e | Historical wind resource | Long-term adjustment | 4 | 0 | 11 | Onshore, analyst comparison | Mortensen and Ejsing Jørgensen (2013)
1991 | e | Historical wind resource | Long-term period | 10 | – | – | | Simon (1991)
2004 | e | Historical wind resource | Long-term period | 5 | – | – | WindPro 2.4; wind statistics | EMD International A/S (2004)
2008 | e | Historical wind resource | Long-term period | 5 | – | – | Climate variation: 1997–2007 | Johnson et al. (2008), White (2008a)
2010 | e | Historical wind resource | Long-term period | 5 | – | – | WindPro 2.7; long-term wind variability | Nielsen et al. (2010)
2012 | e | Historical wind resource | Long-term period | 5.9 | – | – | Long-term wind speed | Tchou (2012)
2013 | e | Historical wind resource | Long-term period | 3.5 | 0 | 12 | Onshore, analyst comparison | Mortensen and Ejsing Jørgensen (2013)
2014 | e | Historical wind resource | Long-term period | – | 2 | 11 | Long-term wind speed and its interannual variability | Geer (2014)
2014 | e | Historical wind resource | Long-term period | 3.2 | 2.1 | 4.8 | | AWS Truepower (2014)
2015 | e | Historical wind resource | Long-term period | – | 5.5 | 9.5 | | Breakey (2019)
2019 | e | Historical wind resource | Long-term period | – | 2 | 8.4 | 1-year uncertainty | Dutrieux (2019)
2010 | o | Historical wind resource | Long-term period | 2 | – | – | | Rogers (2010)
2012 | o | Historical wind resource | Long-term period | 8.2 | – | – | Long-term wind speed | Tchou (2012)
List of other energy uncertainties outside of the IEC-proposed
framework (Table A2); these values are used to generate Fig. 11.
Year | Est/obs | Category | Avg (%) | Min (%) | Max (%) | Notes | Source
2013 | e | External wake | 1.6 | – | – | Offshore, analyst comparison | Mortensen and Ejsing Jørgensen (2013)
2013 | e | Methodology | 5 | – | – | Energy calculation | Holtslag (2013)
2018 | e | Methodology | – | 1 | 3 | Analyst uncertainty | Craig et al. (2018)
2014 | e | Power curve measurement | – | 4 | 10 | | Redouane (2014)
2002 | o | Power curve measurement | – | 6 | 8 | | Friis Pedersen et al. (2002)
2013 | o | Power curve measurement | 3.5 | – | – | Power curve test | Kassebaum (2013)
2015 | o | Power curve measurement | 4.5 | – | – | | Kassebaum (2015)
List of wind speed uncertainties, which are used for Fig. 12. Unlike
the other tables in Appendix B, this table records values as percentages of
wind speed.
Year | Est/obs | Category | Avg (%) | Min (%) | Max (%) | Notes | Source
2018 | e | Blockage | – | 1.9 | 3.4 | | Bleeg et al. (2018)
2011 | e | Distortion | – | 0 | 2 | Non-ideal flow; includes tower, boom, other equipment | Hatlee (2011)
2014 | e | Distortion | – | 1.1 | 3.6 | Includes distortion of terrain and mounting | Redouane (2014)
2010 | e | Future variability | – | 1 | 3 | Future climate; WindPro 2.7 | Nielsen et al. (2010)
2011 | e | Future variability | – | 4 | 6 | | Comstock (2011)
2012 | e | Future variability | – | 1.4 | 2.2 | Future wind resource | Brower (2012)
2011 | e | Horizontal extrapolation | – | 1 | 4 | | Comstock (2011)
2013 | e | Horizontal extrapolation | 5 | – | – | Reference data | Holtslag (2013)
2013 | e | Horizontal extrapolation | 1 | – | – | Lidar | Holtslag (2013)
2013 | e | Horizontal extrapolation | – | 0 | 5 | | Mortensen (2013)
2015 | e | Horizontal extrapolation | – | 0 | 2.2 | Long-term extrapolation | Mortensen et al. (2015a)
2010 | o | Horizontal extrapolation | 1.9 | – | – | Analyst comparison; “extrapolation” | Walter (2010)
1991 | e | Interannual variability | 6.1 | – | – | | Simon (1991)
2006 | e | Interannual variability | – | 8 | 12 | Northern Europe | Pryor et al. (2006)
2008 | e | Interannual variability | – | 2 | 7 | Windiness | Johnson et al. (2008)
2009 | e | Interannual variability | 6 | – | – | Recommended in WindFarmer | Garrad Hassan and Partners Ltd (2009)
2010 | e | Interannual variability | 3.5 | – | – | | Hendrickson (2010)
2010 | e | Interannual variability | 6 | – | – | 1-year uncertainty; WindPro 2.7 | Nielsen et al. (2010)
2010 | e | Interannual variability | 1.3 | – | – | 20-year uncertainty; WindPro 2.7 | Nielsen et al. (2010)
2011 | e | Interannual variability | – | 4 | 6 | United States | Rogers (2011)
2013 | e | Interannual variability | – | 2 | 6 | Variability | Mortensen (2013)
2014 | e | Interannual variability | – | 2 | 4 | | Brower (2014)
2014 | e | Interannual variability | – | 3.5 | 6 | | Geer (2014)
2017 | e | Interannual variability | 5 | – | – | | Perry (2017)
2018 | e | Interannual variability | 2.1 | – | – | 37 years in contiguous United States | Lee et al. (2018)
2019 | e | Interannual variability | – | 1.4 | 5.4 | | Gkarakis and Orfanaki (2019)
2014 | o | Interannual variability | – | 5.7 | 8.8 | | Istchenko (2014)
2018 | e | Intermonthly variability | 10.2 | – | – | 37 years in contiguous United States | Lee et al. (2018)
2014 | o | Intermonthly variability | – | 19 | 24 | | Istchenko (2014)
2010 | e | Long-term wind speed | 3 | 2 | 4 | | Clive (2010)
2011 | e | Long-term wind speed | – | 3.7 | 4.8 | Combine nearby weather station, airport, modeled data | Rogers (2011)
2011 | e | Long-term wind speed | – | 1.5 | 4 | | Comstock (2011)
2012 | e | Long-term wind speed | – | 1 | 2 | | Brown (2012)
2012 | e | Long-term wind speed | – | 1.6 | 4 | | Brower (2012)
2013 | e | Long-term wind speed | 2 | – | – | Reference data; long-term representation | Holtslag (2013)
2014 | e | Long-term wind speed | – | 0 | 11 | Uncertainty decreases with more years of data | Hamel (2014)
2014 | e | Long-term wind speed | – | 1 | 5 | | Hendrickson (2014)
2014 | e | Long-term wind speed | – | 1.1 | 6.1 | From data analysis and measure–correlate–predict | Redouane (2014)
2006 | o | Long-term wind speed | – | 3.5 | 20 | 1000 h of data | Rogers et al. (2006)
2006 | o | Long-term wind speed | – | 3 | 6 | 9000 h of data at offshore wind farms | Rogers (2011)
2006 | o | Long-term wind speed | – | 2 | 8 | 9000 h of data at offshore wind farms | Rogers (2011)
2010 | e | Measure–correlate–predict | – | 1 | 3 | WindPro 2.7 | Nielsen et al. (2010)
2012 | e | Measure–correlate–predict | 2.5 | 1 | 3 | Long-term wind speed and correction | Mortensen et al. (2012)
2013 | e | Measure–correlate–predict | 4 | – | – | Lidar; long-term representation and correlation | Holtslag (2013)
2014 | e | Measure–correlate–predict | – | 0.7 | 6.4 | | Redouane (2014)
2010 | e | Plant performance | 3 | 1 | 4 | Energy loss model | Clive (2010)
2010 | e | Terrain data and resolution | – | 3 | 4 | | Clive (2010)
2012 | e | Terrain data and resolution | 1.5 | – | – | | Brown (2012)
2010 | e | Total wind speed | 7 | 3 | 10 | | Clive (2010)
2012 | e | Total wind speed | – | 3 | 13 | | Brower (2012)
2013 | e | Total wind speed | 8.9 | – | – | Reference data | Holtslag (2013)
2013 | e | Total wind speed | 5.1 | – | – | Lidar | Holtslag (2013)
2015 | e | Total wind speed | – | 3 | 10 | | Brower et al. (2015)
2014 | o | Total wind speed | – | 9 | 16 | Nine locations | Redouane (2014)
2011 | e | Vertical extrapolation | – | 1 | 3 | | Comstock (2011)
2011 | e | Vertical extrapolation | – | 0 | 4 | | Faghani (2011)
2012 | e | Vertical extrapolation | – | 0 | 6.3 | | Brower (2012)
2013 | e | Vertical extrapolation | 5 | – | – | Reference data | Holtslag (2013)
2013 | e | Vertical extrapolation | 0 | – | – | Lidar | Holtslag (2013)
2013 | e | Vertical extrapolation | – | 0 | 5 | | Mortensen (2013)
2014 | e | Vertical extrapolation | – | 0 | 2 | | Redouane (2014)
2015 | e | Vertical extrapolation | – | 0.7 | 3.6 | | Mortensen et al. (2015a)
2016 | e | Vertical extrapolation | – | 2 | 6 | Non-forested | Kelly (2016)
2017 | e | Vertical extrapolation | 1 | – | – | Industry accepted; 1 % per 10 m | Langreder (2017)
2019 | e | Vertical extrapolation | – | 0 | 7 | Depends on shear and terrain | Kelly et al. (2019)
2010 | o | Vertical extrapolation | 1.9 | – | – | Analyst comparison; “extrapolation” | Walter (2010)
2019 | o | Vertical extrapolation | – | 0 | 4 | Depends on shear and terrain | Kelly et al. (2019)
2012 | e | Wake effect | 2 | – | – | | Brown (2012)
2014 | e | Wake effect | – | 1 | 6 | Actuator disk and computational fluid dynamics models | Abiven et al. (2014)
2014 | e | Wake effect | 0 | – | – | Park and Ainslie models | Abiven et al. (2014)
2007 | e | Wind speed measurement | 2.4 | – | – | | Breakey (2019)
2010 | e | Wind speed measurement | 3 | 1 | 4 | | Clive (2010)
2010 | e | Wind speed measurement | 2 | – | – | WindPro 2.7 | Nielsen et al. (2010)
2011 | e | Wind speed measurement | – | 1 | 2.5 | Ideal flow; calibration | Hatlee (2011)
2011 | e | Wind speed measurement | – | 1.5 | 5 | Non-ideal flow; total measurement | Hatlee (2011)
2011 | e | Wind speed measurement | 3.1 | – | – | | Rogers (2011)
2011 | e | Wind speed measurement | – | 1.5 | 3.5 | | Comstock (2011)
2011 | e | Wind speed measurement | – | 2 | 3 | | Faghani (2011)
2012 | e | Wind speed measurement | – | 0.5 | 1.5 | | Brown (2012)
2012 | e | Wind speed measurement | – | 1 | 2.5 | Single anemometer | Brower (2012)
2013 | e | Wind speed measurement | 5 | – | – | Reference data; wind statistics | Holtslag (2013)
2013 | e | Wind speed measurement | 3 | – | – | Lidar; wind statistics | Holtslag (2013)
2013 | e | Wind speed measurement | – | 2 | 5 | Wind measurement | Mortensen (2013)
2014 | e | Wind speed measurement | – | 0 | 5 | Measurement campaign | Redouane (2014)
2015 | e | Wind speed measurement | 2 | – | – | Anemometer and calibration | Geer (2015)
2015 | e | Wind speed measurement | 2 | – | – | Two met masts | Brower et al. (2015)
2016 | e | Wind speed measurement | 2 | – | – | | Kelly (2016)
2017 | e | Wind speed measurement | 0.8 | – | – | | Breakey (2019)
2019 | e | Wind speed measurement | 1.58 | 1.54 | 1.86 | Range of standard, recommended, and lidar methods | Medley and Smith (2019)
2019 | e | Wind speed measurement | 4 | – | – | Lidar calibration | Slater (2019)
2019 | e | Wind speed measurement | – | 2.23 | 2.68 | Range from using computational fluid dynamics models or not | Crease (2019)
2019 | e | Wind speed measurement | – | 6 | 8 | | Keck et al. (2019)
2013 | o | Wind speed measurement | – | 2 | 3 | Lidar on flat terrain | Albers et al. (2013)
2015 | o | Wind speed measurement | – | 1.1 | 2.2 | Anemometer | Clark (2015)
2016 | o | Wind speed measurement | – | 1 | 2 | Anemometer; industry accepted | Smith et al. (2016)
2009 | e | Wind speed modeling | 7 | – | – | | VanLuvanee et al. (2009)
2010 | e | Wind speed modeling | 4 | 2 | 6 | Flow model accuracy | Clive (2010)
2010 | e | Wind speed modeling | – | 3 | 10 | | Brower et al. (2010)
2011 | e | Wind speed modeling | – | 2 | 5 | | Faghani (2011)
2012 | e | Wind speed modeling | – | 1 | 5.5 | | Brown (2012)
2012 | e | Wind speed modeling | – | 2 | 10 | Flow model | Brower (2012)
2013 | e | Wind speed modeling | – | 1.7 | 6.9 | | Abiven et al. (2013)
2015 | e | Wind speed modeling | – | 10 | 12 | | Brower et al. (2015)
2017 | e | Wind speed modeling | – | 3 | 5 | WAsP | Jog (2017)
2017 | e | Wind speed modeling | – | 0.9 | 2 | Ensemble model | Jog (2017)
2017 | e | Wind speed modeling | 2.9 | 1.4 | 7.6 | | Poulos (2017)
2019 | e | Wind speed modeling | 2.5 | – | – | 2.5 % per km of extrapolation distance in WAsP; industry-recommended assumption | Zhang et al. (2019)
2015 | o | Wind speed modeling | – | 4 | 10 | | Brower et al. (2015)
2016 | o | Wind speed modeling | – | 1.2 | 4.3 | Weighted absolute total error in WindFarmer | Neubert (2016)
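Because AEP responds nonlinearly to wind speed, the wind-speed uncertainties in this table are commonly translated into energy uncertainties through a site-specific sensitivity factor derived from the power curve and wind speed distribution. A minimal sketch of that conversion; the sensitivity value of 1.8 is an illustrative assumption, not a figure from the sources above:

```python
# Translate a wind-speed uncertainty into an energy uncertainty via a
# sensitivity factor: relative change in AEP per relative change in mean
# wind speed. The factor here (1.8) is a hypothetical, site-dependent value.
speed_uncertainty = 0.03   # 3% of mean wind speed (assumed)
sensitivity = 1.8          # d(AEP)/AEP per d(U)/U (assumed)

energy_uncertainty = sensitivity * speed_uncertainty
print(f"Energy uncertainty: {energy_uncertainty:.1%}")  # 5.4%
```

This scaling is why wind-speed uncertainties must be kept separate from the energy-denominated tables elsewhere in Appendix B.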
As in Fig. 8, the trend in energy-production loss: (a) estimated total curtailment loss, (b) observed total availability loss, and (c) estimated total wake loss. Note that the
ranges of the horizontal and vertical axes differ in each panel.
Data availability
Appendix B includes all the data used to generate the plots in this article.
Author contributions
JCYL performed the literature search, conducted the data analysis, and prepared the article. MJF provided guidance and reviewed the article.
Competing interests
The authors declare that they have no conflict of interest.
Acknowledgements
This work was authored by the National Renewable Energy Laboratory (NREL),
operated by the Alliance for Sustainable Energy, LLC, for the US Department of Energy (DOE), under contract no. DE-AC36-08GO28308. Funding provided by the US Department of Energy Office of Energy Efficiency and Renewable Energy Wind Energy Technologies Office. The views expressed in the article do not necessarily represent the views of the DOE or US Government. The US Government retains and the publisher, by accepting the article for publication, acknowledges that the US Government retains a nonexclusive, paid up, irrevocable, worldwide license to publish or reproduce the published form of this work, or allow others to do so, for US Government purposes.
The authors would like to thank our external collaborators including Matthew Breakey, Matthew Hendrickson, Kisha James, Cory Jog, and the American Wind Energy Association; our colleagues at NREL including Sheri Anstedt, Derek Berry, Rachel Eck, Julie Lundquist, Julian Quick, David Snowberg, Paul Veers, and the NREL library; Carlo Bottasso as our editor, Mark Kelly as our peer reviewer, and one anonymous referee.
Financial support
This research has been supported by the US Department of Energy (grant no. DE-AC36-08GO28308).
Review statement
This paper was edited by Carlo L. Bottasso and reviewed by Mark Kelly and one anonymous referee.
References
Abascal, A., Herrero, M., Torrijos, M., Dumont, J., Álvarez, M., and
Casso, P.: An approach for estimating energy losses due to ice in
pre-construction energy assessments, in: WindEurope 2019, WindEurope, Bilbao,
Spain, 2019.
Abiven, C., Brady, O., and Triki, I.: Mesoscale and CFD Coupling for Wind
Resource Assessment, in: AWEA Wind Resource and Project Energy Assessment
Workshop 2013, AWEA, Las Vegas, NV, 2013.
Abiven, C., Parisse, A., Watson, G., and Brady, O.: CFD Wake Modeling: Where
Do We Stand?, in: AWEA Wind Resource and Project Energy Assessment Workshop 2014, AWEA, Orlando, FL, 2014.
Albers, A., Klug, H., and Westermann, D.: Outdoor Comparison of Cup Anemometers, in: German wind energy conference, DEWEK 2000, Wilhelmshaven, Germany, p. 5, 2000.
Albers, A., Franke, K., Wagner, R., Courtney, M., and Boquet, M.:
Ground-based remote sensor uncertainty – a case study for a wind lidar,
available at:
https://www.researchgate.net/publication/267780849_Ground-based_remote_sensor_uncertainty_-_a_case_study_for_a_wind_lidar (last access: 31 October 2020), 2013.
Anderson, M.: Seasonality, Stability and MCP, in: AWEA Wind Resource and Project Energy Assessment Workshop 2008, AWEA, Portland, OR, 2008.
Apple, J.: Wind Farm Power Curves: Guidelines for New Applications, in: AWEA
Wind Resource and Project Energy Assessment Workshop 2015, AWEA, New Orleans, LA, 2015.
AWS Truepower: Closing The Gap On Plant Underperformance: A Review and
Calibration of AWS Truepower's Energy Estimation Methods, AWS Truepower, LLC, Albany, NY, 2009.
AWS Truepower: AWS Truepower Loss and Uncertainty Methods, Albany, NY,
available at:
https://www.awstruepower.com/assets/AWS-Truepower-Loss-and-Uncertainty-Memorandum-5-Jun-2014.pdf
(last access: 29 August 2017), 2014.
Balfrey, D.: Data Processing, in: AWEA Wind Resource and Project Energy
Assessment Workshop 2010, AWEA, Oklahoma City, OK, 2010.
Barthelmie, R. J., Murray, F., and Pryor, S. C.: The economic benefit of
short-term forecasting for wind energy in the UK electricity market, Energy
Policy, 36, 1687–1696, 10.1016/J.ENPOL.2008.01.027, 2008.
Baughman, E.: Error Distributions, Tails, and Outliers, in: AWEA Wind Resource and Project Energy Assessment Workshop 2016, AWEA, Minneapolis, MN, 2016.
Beaucage, P., Kramak, B., Robinson, N., and Brower, M. C.: Modeling the
dynamic behavior of wind farm power generation: Building upon SCADA system
analysis, in: AWEA Wind Resource and Project Energy Assessment Workshop 2016,
AWEA, Minneapolis, MN, 2016.
Bernadett, D., Brower, M., Van Kempen, S., Wilson, W., and Kramak, B.: 2012 Backcast Study: Verifying AWS Truepower's Energy and Uncertainty Estimates, AWS Truepower, LLC, Albany, NY, 2012.
Bernadett, D., Brower, M., and Ziesler, C.: Loss Adjustment Refinement, in:
AWEA Wind Resource and Project Energy Assessment Workshop 2016, AWEA, Minneapolis, MN, 2016.
Bird, L., Cochran, J., and Wang, X.: Wind and Solar Energy Curtailment:
Experience and Practices in the United States, NREL/TP-6A20-60983, National Renewable Energy Laboratory, Golden, CO, 2014.
Bleeg, J.: Accounting for Blockage Effects in Energy Production Assessments,
in: AWEA Wind Resource and Project Energy Assessment Workshop 2018, AWEA,
Austin, TX, 2018.
Bleeg, J., Purcell, M., Ruisi, R., and Traiger, E.: Wind Farm Blockage and
the Consequences of Neglecting Its Impact on Energy Production, Energies, 11, 1609, 10.3390/en11061609, 2018.
Breakey, M.: An Armchair Meteorological Campaign Manager: A Retrospective
Analysis, in: AWEA Wind Resource and Project Energy Assessment Workshop 2019,
AWEA, Renton, WA, 2019.
Brower, M.: What do you mean you're not sure? Concepts in uncertainty and risk management, in: AWEA Wind Resource and Project Energy Assessment Workshop 2011, AWEA, Seattle, WA, 2011.
Brower, M. C.: Wind resource assessment: a practical guide to developing a
wind project, Wiley, Hoboken, NJ, 2012.
Brower, M. C.: Measuring and Managing Uncertainty, in: AWEA Wind Resource and
Project Energy Assessment Workshop 2014, AWEA, Orlando, FL, 2014.
Brower, M. C.: State of the P50, in: AWEA WINDPOWER 2017, AWEA, Anaheim, CA, 2017.
Brower, M. C. and Robinson, N. M.: Validation of the openWind Deep Array Wake Model (DAWM), in: AWEA Wind Resource and Project Energy Assessment Workshop 2013, AWEA, Las Vegas, NV, 2013.
Brower, M. C., Robinson, N. M., and Hale, E.: Wind Flow Modeling Uncertainty:
Quantification and Application to Monitoring Strategies and Project Design,
AWS Truepower, LLC, Albany, NY, 2010.
Brower, M. C., Bernadett, D., Van Kempen, S., Wilson, W., and Kramak, B.:
Actual vs. Predicted Plant Production: The Role of Turbine Performance, in:
AWEA Wind Resource and Project Energy Assessment Workshop 2012, AWEA, Pittsburgh, PA, 2012.
Brower, M. C., Robinson, N. M., and Vila, S.: Wind Flow Modeling Uncertainty:
Theory and Application to Monitoring Strategies and Project Design, AWS Truepower, LLC, Albany, NY, 2015.
Brown, G.: Wakes: Ten Rows and Beyond, a Cautionary Tale!, in: AWEA Wind Resource and Project Energy Assessment Workshop 2012, AWEA, Pittsburgh, PA,
2012.
Byrkjedal, Ø., Hansson, J., and van der Velde, H.: Development of operational forecasting for icing and wind power at cold climate sites, in:
IWAIS 2015: 16th International Workshop on Atmospheric Icing of Structures,
IWAIS, Uppsala, Sweden, p. 4, 2015.
Clark, S.: Wind Tunnel Comparison of Anemometer Calibration, in: AWEA Wind
Resource and Project Energy Assessment Workshop 2015, AWEA, New Orleans, LA, 2015.
Clifton, A., Smith, A., and Fields, M.: Wind Plant Preconstruction Energy
Estimates: Current Practice and Opportunities, NREL/TP-5000-64735, National Renewable Energy Laboratory, Golden, CO, 2016.
Clive, P.: Wind Farm Performance, in: AWEA Wind Resource and Project Energy
Assessment Workshop 2010, AWEA, Oklahoma City, OK, 2010.
Colmenar-Santos, A., Campíez-Romero, S., Enríquez-Garcia, L. A., and
Pérez-Molina, C.: Simplified Analysis of
the Electric Power Losses for On-Shore Wind Farms Considering Weibull Distribution Parameters, Energies, 7, 6856–6885, 10.3390/en7116856, 2014.
Comstock, K.: Uncertainty and Risk Management in Wind Resource Assessment, in: AWEA Wind Resource and Project Energy Assessment Workshop 2011, AWEA,
Seattle, WA, 2011.
Comstock, K.: Identifying Pitfalls and Quantifying Uncertainties in Operating Project Re-Evaluation, in: AWEA Wind Resource and Project Energy Assessment Workshop 2012, AWEA, Pittsburgh, PA, 2012.
Conroy, N., Deane, J. P., and Ó Gallachóir, B. P.: Wind turbine
availability: Should it be time or energy based? – A case study in Ireland,
Renew. Energy, 36, 2967–2971, 10.1016/J.RENENE.2011.03.044, 2011.
Cox, S.: Validation of 25 offshore pre-construction energy forecasts against
real operational wind farm data, in: AWEA Wind Resource and Project Energy
Assessment Workshop 2015, AWEA, New Orleans, LA, 2015.
Craig, A., Optis, M., Fields, M. J., and Moriarty, P.: Uncertainty quantification in the analyses of operational wind power plant performance, J. Phys. Conf. Ser., 1037, 052021, 10.1088/1742-6596/1037/5/052021, 2018.
Crease, J.: CFD Modelling of Mast Effects on Anemometer Readings, in:
WindEurope 2019, WindEurope, Bilbao, Spain, 2019.
Crescenti, G. H., Poulos, G. S., and Bosche, J.: Valuable Lessons From Outliers In A Wind Energy Resource Assessment Benchmark Study, in: AWEA Wind
Resource and Project Energy Assessment Workshop 2019, AWEA, Renton, WA, 2019.
Cushman, A.: Industry Survey of Wind Farm Availability, in: AWEA Wind Resource and Project Energy Assessment Workshop 2009, AWEA, Minneapolis, MN, 2009.
Dahlberg, J.-Å.: Assessment of the Lillgrund Windfarm, Report no. 21858-1, Vattenfall Vindkraft AB, Stockholm, Sweden, 28 pp., 2009.
Derrick, A.: Uncertainty: The Classical Approach, in: AWEA Wind Resource and
Project Energy Assessment Workshop 2009, AWEA, Minneapolis, MN, 2009.
Drees, H. M. and Weiss, D. J.: Compilation of Power Performance Test Results, in: AWEA Wind Resource and Project Energy Assessment Workshop 2012, AWEA, Pittsburgh, PA, 2012.
Drunsic, M. W.: Actual vs. Predicted Wind Project Performance: Is the Industry Closing the Gap?, in: AWEA Wind Resource and Project Energy Assessment Workshop 2012, AWEA, Pittsburgh, PA, 2012.
Dutrieux, A.: How long should be long term to reduce uncertainty on annual
wind energy assessment, in: WindEurope 2019, WindEurope, Bilbao, Spain, 2019.
Ehrmann, R. S., Wilcox, B., White, E. B., and Maniaci, D. C.: Effect of
Surface Roughness on Wind Turbine Performance, SAND2017-10669, Sandia National Laboratories, Albuquerque, NM and Livermore, CA, 2017.
Elkinton, M.: Strengthening Wake Models: DNV GL Validations & Advancements, in: AWEA Wind Resource and Project Energy Assessment Workshop 2013, AWEA, Las Vegas, NV, 2013.
Elkinton, M.: Current view of P50 estimate accuracy based on validation efforts, in: AWEA WINDPOWER 2017, AWEA, Anaheim, CA, 2017.
EMD International A/S: WindPRO 2.4., EMD International A/S, Aalborg, Denmark, 2004.
Faghani, D.: Measurement Uncertainty of Ground-Based Remote Sensing, in: AWEA
Wind Resource and Project Energy Assessment Workshop 2011, AWEA, Seattle, WA, 2011.
Faghani, D., Desrosiers, E., Aït-Driss, B., and Poulin, M.: Use of Remote Sensing in Addressing Bias & Uncertainty in Wind Measurements, in: AWEA Wind Resource and Project Energy Assessment Workshop 2008, AWEA, Portland, OR, 2008.
Faubel, A.: Digitalisation: Creating Value in O&M, in: WindEurope 2019,
WindEurope, Bilbao, Spain, 2019.
Filippelli, M., Bernadett, D., Sloka, L., Mazoyer, P., and Fleming, A.:
Concurrent Power Performance Measurements, in: AWEA Wind Resource and Project
Energy Assessment Workshop 2017, AWEA, Snowbird, UT, 2017.
Filippelli, M., Sherwin, B., and Fields, J.: IEC 61400-15 Working Group
Update, in: AWEA Wind Resource and Project Energy Assessment Workshop 2018,
AWEA, Austin, TX, 2018.
Friis Pedersen, T., Gjerding, S., Enevoldsen, P., Hansen, J. K., and
Jørgensen, H. K.: Wind turbine power performance verification in complex
terrain and wind farms, report no. Risoe-R 1330(EN), Forskningscenter Risoe, Roskilde, Denmark, 2002.
Garrad Hassan and Partners Ltd: GH WindFarmer Theory Manual, Bristol, England, 2009.
Geer, T.: Towards a more realistic uncertainty model, in: AWEA Wind Resource
and Project Energy Assessment Workshop 2014, AWEA, Orlando, FL, 2014.
Geer, T.: Identifying production risk in preconstruction assessments: Can we
do it?, in: AWEA Wind Resource and Project Energy Assessment Workshop 2015,
AWEA, New Orleans, LA, 2015.
Germer, S. and Kleidon, A.: Have wind turbines in Germany generated electricity as would be expected from the prevailing wind conditions in 2000–2014?, edited by: Leahy, P., PLoS One, 14, e0211028,
10.1371/journal.pone.0211028, 2019.
Gillenwater, D., Masson, C., and Perron, J.: Wind Turbine Performance During
Icing Events, in: 46th AIAA Aerospace Sciences Meeting and Exhibit, American
Institute of Aeronautics and Astronautics, Reston, Virginia, 2008.
Gkarakis, K. and Orfanaki, G.: Historical wind speed trends and impact on
long-term adjustment and interannual variability in Cyprus, in: WindEurope 2019, WindEurope, Bilbao, Spain, 2019.
Graves, A., Harman, K., Wilkinson, M., and Walker, R.: Understanding Availability Trends of Operating Wind Farms, in: AWEA WINDPOWER 2008, AWEA,
Houston, TX, 2008.
Halberg, E.: A Monetary Comparison of Remote Sensing and Tall Towers, in: AWEA Wind Resource and Project Energy Assessment Workshop 2017, AWEA, Snowbird, UT, 2017.
Halberg, E. and Breakey, M.: On-Shore Wake Validation Study: Wake Analysis
Based on Production Data, in: AWEA Wind Resource and Project Energy Assessment Workshop 2013, AWEA, Las Vegas, NV, 2013.
Hale, E.: External Perspectives: Estimate Accuracy and Plant Operations, in:
AWEA Wind Resource and Project Energy Assessment Workshop 2017, AWEA, Snowbird, UT, 2017.
Hamel, M.: Estimating 50-yr Extreme Wind Speeds from Short Datasets, in: AWEA
Wind Resource and Project Energy Assessment Workshop 2014, AWEA, Orlando, FL, 2014.
Hamilton, S. D., Millstein, D., Bolinger, M., Wiser, R., and Jeong, S.: How
Does Wind Project Performance Change with Age in the United States?, Joule,
4, 1004–1020, 10.1016/j.joule.2020.04.005, 2020.
Hasager, C., Bech, J. I., Bak, C., Vejen, F., Madsen, M. B., Bayar, M., Skrzypinski, W. R., Kusano, Y., Saldern, M., Tilg, A.-M., Fæster, S., and
Johansen, N. F.-J.: Solution to minimize leading edge erosion on turbine blades, in: WindEurope 2019, WindEurope, Bilbao, Spain, 2019.
Hatlee, S.: Measurement Uncertainty in Wind Resource Assessment, in: AWEA Wind
Resource and Project Energy Assessment Workshop 2011, AWEA, Seattle, WA, 2011.
Hatlee, S.: Operational Performance vs. Precon Estimate, in: AWEA Wind Resource and Project Energy Assessment Workshop 2015, AWEA, New Orleans, LA, 2015.
Healer, B.: Liquid Power Markets 201, in: AWEA Wind Resource and Project Energy Assessment Workshop 2018, AWEA, Austin, TX, 2018.
Hendrickson, M.: 2009 AWEA Wind Resource & Project Energy Assessment Workshop – Introduction, in: AWEA Wind Resource and Project Energy Assessment Workshop 2009, AWEA, Minneapolis, MN, 2009.
Hendrickson, M.: Extending Data – by whatever means necessary, in: AWEA Wind
Resource and Project Energy Assessment Workshop 2010, AWEA, Oklahoma City, OK, 2010.
Hendrickson, M.: Industry Survey of Wind Energy Assessment Techniques, in:
AWEA Wind Resource and Project Energy Assessment Workshop 2011, AWEA, Seattle, WA, 2011.
Hendrickson, M.: Extreme Winds in the Suitability Context: Should we be
Concerned?, in: AWEA Wind Resource and Project Energy Assessment Workshop 2014, AWEA, Orlando, FL, 2014.
Hendrickson, M.: P50 Bias Update: Are we there yet?, in: AWEA Wind Resource
and Project Energy Assessment Workshop 2019, AWEA, Renton, WA, 2019.
Hill, N., Pullinger, D., Zhang, M., and Crutchley, T.: Validation of windfarm
downtime modelling and impact on grid-constrained projects, in: WindEurope 2019, WindEurope, Bilbao, Spain, 2019.
Holtslag, E.: Improved Bankability: The Ecofys position on LiDAR use, Utrecht, the Netherlands, 2013.
Horn, B.: Achieving Measurable Financial Results in Operational Assessments,
in: AWEA Wind Resource and Project Energy Assessment Workshop 2009, AWEA,
Minneapolis, MN, 2009.
Istchenko, R.: WRA Uncertainty Validation, in: AWEA Wind Resource and Project
Energy Assessment Workshop 2014, AWEA, Orlando, FL, 2014.
Istchenko, R.: Re-examining Uncertainty and Bias, in: AWEA Wind Resource and
Project Energy Assessment Workshop 2015, AWEA, New Orleans, LA, 2015.
Jaynes, D.: The Vestas Operating Fleet: Real-World Experience in Wind Turbine Siting and Power Curve Verification, in: AWEA Wind Resource and Project Energy Assessment Workshop 2012, AWEA, Pittsburgh, PA, 2012.
Jog, C.: Benchmark: Wind flow, in: AWEA Wind Resource and Project Energy
Assessment Workshop 2017, AWEA, Snowbird, UT, 2017.
Johnson, C.: Actual vs. Predicted performance – Validating pre construction
energy estimates, in: AWEA Wind Resource and Project Energy Assessment
Workshop 2012, AWEA, Pittsburgh, PA, 2012.
Johnson, C., White, E., and Jones, S.: Summary of Actual vs. Predicted Wind
Farm Performance: Recap of WINDPOWER 2008, in: AWEA Wind Resource and Project
Energy Assessment Workshop 2008, AWEA, Portland, OR, 2008.
Johnson, J.: Typical Availability Losses and Categorization: Observations from an Operating Project Portfolio, in: AWEA Wind Resource and Project Energy Assessment Workshop 2011, AWEA, Seattle, WA, 2011.
Jones, S.: Project Underperformance: 2008 Update, in: AWEA WINDPOWER 2008,
AWEA, Houston, TX, 2008.
Kassebaum, J.: Power Curve Testing with Remote Sensing Devices, in: AWEA Wind
Resource and Project Energy Assessment Workshop 2013, AWEA, Las Vegas, NV, 2013.
Kassebaum, J. L.: What p-level is your p-ower curve?, in: AWEA Wind Resource
and Project Energy Assessment Workshop 2015, AWEA, New Orleans, LA, 2015.
Keck, R.-E., Sondell, N., and Håkansson, M.: Validation of a fully numerical approach for early stage wind resource assessment in absence of
on-site measurements, in: WindEurope 2019, WindEurope, Bilbao, Spain, 2019.
Kelly, M.: Uncertainty in vertical extrapolation of wind statistics: shear-exponent and WAsP/EWA methods, No. 0121, DTU Wind Energy, Roskilde, Denmark, 2016.
Kelly, M., Kersting, G., Mazoyer, P., Yang, C., Hernández Fillols, F.,
Clark, S., and Matos, J. C.: Uncertainty in vertical extrapolation of measured wind speed via shear, No. E-0195, DTU Wind Energy, Roskilde, Denmark, 2019.
Kim, K. and Shin, P.: Analysis on the Parameters Under the Power Measurement
Uncertainty for a Small Wind Turbine, in: WindEurope 2019, WindEurope, Bilbao, Spain, 2019.
Kline, J.: Wind Farm Wake Analysis: Summary of Past & Current Work, in: AWEA Wind Resource and Project Energy Assessment Workshop 2013, AWEA, Las Vegas, NV, 2013.
Kline, J.: Wake Model Validation Test, in: AWEA Wind Resource and Project
Energy Assessment Workshop 2016, AWEA, Minneapolis, MN, 2016.
Kline, J.: Detecting and Correcting for Bias in Long-Term Wind Speed Estimates, in: AWEA Wind Resource and Project Energy Assessment Workshop 2019, AWEA, Renton, WA, 2019.
Lackner, M. A., Rogers, A. L., and Manwell, J. F.: Uncertainty Analysis in
MCP-Based Wind Resource Assessment and Energy Production Estimation, J. Sol.
Energ. Eng., 130, 31006–31010, 10.1115/1.2931499, 2008.
Langel, C. M., Chow, R., Hurley, O. F., van Dam, C. P., Ehrmann, R. S., White, E. B., and Maniaci, D.: Analysis of the Impact of Leading Edge Surface
Degradation on Wind Turbine Performance, in: AIAA SciTech 33rd Wind Energy
Symposium, American Institute of Aeronautics and Astronautics, Kissimmee, FL, p. 13, 2015.
Langreder, W.: Uncertainty of Vertical Wind Speed Extrapolation, in: AWEA Wind Resource and Project Energy Assessment Workshop 2017, AWEA, Snowbird, UT, 2017.
Latoufis, K., Riziotis, V., Voutsinas, S., and Hatziargyriou, N.: Effects of
leading edge erosion on the power performance and acoustic noise emissions of locally manufactured small wind turbines blades, in: WindEurope 2019, WindEurope, Bilbao, Spain, 2019.
Lee, J.: Banter on Blockage, in: AWEA Wind Resource and Project Energy
Assessment Workshop 2019, AWEA, Renton, WA, 2019.
Lee, J. C. Y., Fields, M. J., and Lundquist, J. K.: Assessing variability of
wind speed: comparison and validation of 27 methodologies, Wind Energ. Sci.,
3, 845–868, 10.5194/wes-3-845-2018, 2018.
Liew, J., Urbán, A. M., Dellwick, E., and Larsen, G. C.: The effect of
wake position and yaw misalignment on power loss in wind turbines, in:
WindEurope 2019, WindEurope, Bilbao, Spain, 2019.
Lunacek, M., Fields, M. J., Craig, A., Lee, J. C. Y., Meissner, J., Philips,
C., Sheng, S., and King, R.: Understanding Biases in Pre-Construction Estimates, J. Phys. Conf. Ser., 1037, 062009,
10.1088/1742-6596/1037/6/062009, 2018.
Maniaci, D. C., White, E. B., Wilcox, B., Langel, C. M., van Dam, C. P., and
Paquette, J. A.: Experimental Measurement and CFD Model Development of Thick
Wind Turbine Airfoils with Leading Edge Erosion, J. Phys. Conf. Ser., 753, 022013, 10.1088/1742-6596/753/2/022013, 2016.
McAloon, C.: Wind Assessment: Raw Data to Hub Height Winds, in: AWEA Wind
Resource and Project Energy Assessment Workshop 2010, AWEA, Oklahoma City, OK, 2010.
McCaa, J.: Wake modeling at 3TIER, in: AWEA Wind Resource and Project Energy
Assessment Workshop 2013, AWEA, Las Vegas, NV, 2013.
Medley, J. and Smith, M.: The “Why?”, “What?” and “How?” of lidar type
classification, in: WindEurope 2019, WindEurope, Bilbao, Spain, 2019.
Mibus, M.: Conservatism in Shadow Flicker Assessment and Wind Farm Design, in: AWEA Wind Resource and Project Energy Assessment Workshop 2018, AWEA,
Austin, TX, 2018.
Mönnich, K., Horodyvskyy, S., and Krüger, F.: Comparison of Pre-Construction Energy Yield Assessments and Operating Wind Farm's Energy Yields, UL International GmbH – DEWI, Oldenburg, Germany, 2016.
Mortensen, N. G.: Planning and Development of Wind Farms: Wind Resource
Assessment and Siting, report no. 0045(EN), DTU Wind Energy, Roskilde, Denmark, 2013.
Mortensen, N. G. and Ejsing Jørgensen, H.: Comparative Resource and
Energy Yield Assessment Procedures (CREYAP) Pt. II, in: EWEA Technology Workshop: Resource Assessment 2013, Dublin, Ireland, 2013.
Mortensen, N. G., Ejsing Jørgensen, H., Anderson, M., and Hutton, K.-A.:
Comparison of resource and energy yield assessment procedures, in: Proceedings of EWEA 2012 – European Wind Energy Conference & Exhibition
European Wind Energy Association (EWEA), Technical University of Denmark, Copenhagen, Denmark, p. 10, 2012.
Mortensen, N. G., Nielsen, M., and Ejsing Jørgensen, H.: Comparison of
Resource and Energy Yield Assessment Procedures 2011–2015: What have we learned and what needs to be done?, in: Proceedings of the European Wind
Energy Association Annual Event and Exhibition 2015, European Wind Energy
Association, Paris, France, 2015a.
Mortensen, N. G., Nielsen, M., and Ejsing Jørgensen, H.: EWEA CREYAP benchmark exercises: summary for offshore wind farm cases, Technical University of Denmark, Roskilde, Denmark, 2015b.
Murphy, O.: Blade Erosion Performance Impact, in: 21st Meeting of the Power
Curve Working Group, PCWG, Glasgow, Scotland, 2016.
Neubert, A.: WindFarmer White Paper, DNV GL, Oldenburg, Germany, 2016.
Nielsen, P., Villadsen, J., Kobberup, J., Madsen, P., Jacobsen, T., Thøgersen, M. L., Sørensen, M. V., Sørensen, T., Svenningsen, L.,
Motta, M., Bredelle, K., Funk, R., Chun, S., and Ritter, P.: WindPRO 2.7 User
Guide, 3rd Edn., Aalborg, Denmark, 2010.
Olauson, J., Edström, P., and Rydén, J.: Wind turbine performance
decline in Sweden, Wind Energy, 20, 2049–2053, 10.1002/we.2132, 2017.
Osler, E.: Yaw Error Detection and Mitigation with Nacelle Mounted Lidar, in:
AWEA Wind Resource and Project Energy Assessment Workshop 2013, AWEA, Las
Vegas, NV, 2013.
Ostridge, C.: Understanding & Predicting Turbine Performance, in: AWEA Wind Resource and Project Energy Assessment Workshop 2014, AWEA, Orlando, FL, 2014.
Ostridge, C.: Using Pattern of Production to Validate Wind Flow, Wakes, and
Uncertainty, in: AWEA Wind Resource and Project Energy Assessment Workshop 2015, AWEA, New Orleans, LA, 2015.
Ostridge, C.: Wind Power Project Performance White Paper 2017 Update, DNV GL, Seattle, WA, 2017.
Ostridge, C. and Rodney, M.: Modeling Wind Farm Energy, Revenue and Uncertainty on a Time Series Basis, in: AWEA Wind Resource and Project Energy
Assessment Workshop 2016, AWEA, Minneapolis, MN, 2016.
Papadopoulos, I.: DNV GL Energy Production Assessment Validation 2019, report no. L2C183006-UKBR-R-01, DNV GL – Energy, Bristol, England, 2019.
Pedersen, H. S. and Langreder, W.: Hack the Error Codes of a Wind Turbine,
in: WindEurope 2019, WindEurope, Bilbao, Spain, 2019.
Perry, A.: Cross Validation of Operational Energy Assessments, in: AWEA Wind
Resource and Project Energy Assessment Workshop 2017, AWEA, Snowbird, UT, 2017.
Peyre, N.: How can drones improve topography inspections, terrain modelling and energy yield assessment?, in: WindEurope 2019, WindEurope, Bilbao, Spain, 2019.
Poulos, G. S.: Complex Terrain Mesoscale Wind Flow Modeling: Successes, Failures and Practical Advice, in: AWEA Wind Resource and Project Energy
Assessment Workshop 2017, AWEA, Snowbird, UT, 2017.
Pram, M.: Analysis of Vestas Turbine Performance, in: AWEA Wind Resource and
Project Energy Assessment Workshop 2018, AWEA, Austin, TX, 2018.
Pryor, S. C., Barthelmie, R. J., and Schoof, J. T.: Inter-annual variability
of wind indices across Europe, Wind Energy, 9, 27–38, 10.1002/we.178, 2006.
Pullinger, D., Ali, A., Zhang, M., Hill, M., and Crutchley, T.: Improving
accuracy of wind resource assessment through feedback loops of operational
performance data: a South African case study, in: WindEurope 2019, WindEurope, Bilbao, Spain, 2019.
Randall, G.: Energy Assessment Uncertainty Analysis, in: AWEA Wind Resource
and Project Energy Assessment Workshop 2009, AWEA, Minneapolis, MN, 2009.
Redouane, A.: Analysis of pre- and post-construction wind farm energy yields
with focus on uncertainties, Universität Kassel, Kassel, 2014.
Rezzoug, M.: Innovative system for performance optimization: Independent data to increase AEP and preserve turbine lifetime, in: WindEurope 2019, WindEurope, Bilbao, Spain, 2019.
Rindeskär, E.: Modelling of icing for wind farms in cold climate: A comparison between measured and modelled data for reproducing and predicting
ice accretion, MS thesis, Department of Earth Sciences, Uppsala University, Uppsala, Sweden, ISSN 1650-6553, 2010.
Robinson, N.: Blockage Effect Update, in: AWEA Wind Resource and Project
Energy Assessment Workshop 2019, AWEA, Renton, WA, 2019.
Rogers, A. L., Rogers, J. W., and Manwell, J. F.: Uncertainties in Results of
Measure-Correlate-Predict Analyses, in: European Wind Energy Conference 2006,
Athens, Greece, p. 10, 2006.
Rogers, T.: Effective Utilization of Remote Sensing, in: AWEA Wind Resource
and Project Energy Assessment Workshop 2010, AWEA, Oklahoma City, OK, 2010.
Rogers, T.: Estimating Long-Term Wind Speeds, in: AWEA Wind Resource and
Project Energy Assessment Workshop 2011, AWEA, Seattle, WA, 2011.
Sareen, A., Sapre, C. A., and Selig, M. S.: Effects of leading edge erosion on wind turbine blade performance, Wind Energy, 17, 1531–1542, 10.1002/we.1649, 2014.
Schramm, M., Rahimi, H., Stoevesandt, B., and Tangager, K.: The Influence of Eroded Blades on Wind Turbine Performance Using Numerical Simulations, Energies, 10, 1420, 10.3390/en10091420, 2017.
Shihavuddin, A., Chen, X., Fedorov, V., Nymark Christensen, A., Andre Brogaard Riis, N., Branner, K., Bjorholm Dahl, A., and Reinhold Paulsen, R.: Wind Turbine Surface Damage Detection by Deep Learning Aided Drone Inspection Analysis, Energies, 12, 676, 10.3390/en12040676, 2019.
Sieg, C.: Validation Through Variation: Using Pattern of Production to Validate Wind Flow, Wakes, and Uncertainty, in: AWEA Wind Resource and Project Energy Assessment Workshop 2015, AWEA, New Orleans, LA, 2015.
Simon, R. L.: Long-term Inter-annual Resource Variations in California, in:
Wind Power, Palm Springs, California, 236–243, 1991.
Slater, J.: Floating lidar uncertainty reduction for use on operational wind
farms, in: WindEurope 2019, WindEurope, Bilbao, Spain, 2019.
Slinger, C. W., Harris, M., Ratti, C., Sivamani, G., and Smith, M.: Nacelle
lidars for wake detection and waked inflow energy loss estimation, in:
WindEurope 2019, WindEurope, Bilbao, Spain, 2019a.
Slinger, C. W., Sivamani, G., Harris, M., Ratti, C., and Smith, M.: Wind yaw
misalignment measurements and energy loss projections from a multi-lidar
instrumented wind farm, in: WindEurope 2019, WindEurope, Bilbao, Spain, 2019b.
Smith, M., Wylie, S., Woodward, A., and Harris, M.: Turning the Tides on Wind
Measurements: The Use of Lidar to Verify the Performance of A Meteorological
Mast, in: WindEurope 2016, WindEurope, 2016.
Spalding, T.: Wind Farm Blockage Modeling Summary, in: AWEA Wind Resource and
Project Energy Assessment Workshop 2019, AWEA, Renton, WA, 2019.
Spengemann, P. and Borget, V.: Review and analysis of wind farm operational
data: validation of the predicted energy yield of wind farms based on real
energy production data, DEWI Group, Wilhelmshaven, Germany and Lyon, France, 2008.
Spruce, C. J. and Turner, J. K.: Pitch Control for Eliminating Tower Vibration Events on Active Stall Wind Turbines, Surrey, UK, 2006.
Staffell, I. and Green, R.: How does wind farm performance decline with age?, Renew. Energy, 66, 775–786, 10.1016/j.renene.2013.10.041, 2014.
Standish, K., Rimmington, P., Laursen, J., Paulsen, H., and Nielsen, D.:
Computational Predictions of Airfoil Roughness Sensitivity, in: 48th AIAA
Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace
Exposition, American Institute of Aeronautics and Astronautics, Reston,
Virigina, 2010.
Stehly, T., Beiter, P., Heimiller, D., and Scott, G.: 2017 Cost of Wind
Energy Review, NREL/TP-6A20-72167, National Renewable Energy Laboratory, Golden, CO, 2018.
Stoelinga, M.: A Multi-Project Validation Study of a Time Series-Based Wake
Model, in: WindEurope 2019, WindEurope, Bilbao, Spain, 2019.
Stoelinga, M. and Hendrickson, M.: A Validation Study of Vaisala's Wind
Energy Assessment Methods, Vaisala, Seattle, WA, 2015.
Tchou, J.: Successfully Transitioning Pre-Construction Measurements to Post-Construction Operations, in: AWEA Wind Resource and Project Energy
Assessment Workshop 2012, AWEA, Pittsburgh, PA, 2012.
Tindal, A.: Wake modelling and validation, in: AWEA Wind Resource and Project
Energy Assessment Workshop 2009, AWEA, Minneapolis, MN, 2009.
Trudel, S.: Icing Losses Estimate Validation: From Development To Operation,
in: AWEA Wind Resource and Project Energy Assessment Workshop 2016, AWEA,
Minneapolis, MN, 2016.
VanLuvanee, D., Rogers, T., Randall, G., Williamson, A., and Miller, T.:
Comparison of WAsP, MS-Micro/3, CFD, NWP, and Analytical Methods for Estimating Site-Wide Wind Speeds, in: AWEA Wind Resource and Project Energy
Assessment Workshop 2009, AWEA, Minneapolis, MN, 2009.
Veers, P., Dykes, K., Lantz, E., Barth, S., Bottasso, C. L., Carlson, O.,
Clifton, A., Green, J., Green, P., Holttinen, H., Laird, D., Lehtomäki,
V., Lundquist, J. K., Manwell, J., Marquis, M., Meneveau, C., Moriarty, P.,
Munduate, X., Muskulus, M., Naughton, J., Pao, L., Paquette, J., Peinke, J.,
Robertson, A., Sanz Rodrigo, J., Sempreviva, A. M., Smith, J. C., Tuohy, A.,
and Wiser, R.: Grand challenges in the science of wind energy, Science, 366, 6464, 10.1126/science.aau2027, 2019.
Walls, L.: A New Method to Assess Wind Farm Performance and Quantify Model
Uncertainty, in: AWEA Wind Resource and Project Energy Assessment Workshop 2018, AWEA, Austin, TX, 2018.
Walter, K.: Wind Assessment: Raw Data to Hub Height Winds, in: AWEA Wind Resource and Project Energy Assessment Workshop 2010, AWEA, Oklahoma City, OK, 2010.
Waskom, M., Botvinnik, O., Ostblom, J., Lukauskas, S., Hobson, P., Maoz Gelbart, Gemperline, D. C., Augspurger, T., Halchenko, Y., Cole, J. B.,
Warmenhoven, J., Ruiter, J. de, Pye, C., Hoyer, S., Vanderplas, J., Villalba, S., Kunter, G., Quintero, E., Bachant, P., Martin, M., Meyer, K., Swain, C., Miles, A., Brunner, T., O'Kane, D., Yarkoni, T., Williams, M. L., and Evans, C.: mwaskom/seaborn: v0.10.0, Zenodo, 10.5281/zenodo.3629446, 2020.
White, E.: Continuing Work on Improving Plant Performance Estimates, in: AWEA
Wind Resource and Project Energy Assessment Workshop 2008, AWEA, Portland, OR, 2008a.
White, E.: Understanding and Closing the Gap on Plant Performance, in: AWEA
WINDPOWER 2008, AWEA, Houston, TX, 2008b.
White, E.: Operational Performance: Closing the Loop on Pre-Construction
Estimates, in: AWEA Wind Resource and Project Energy Assessment Workshop 2009, AWEA, Minneapolis, MN, 2009.
Wilcox, B. J., White, E. B., and Maniaci, D. C.: Roughness Sensitivity Comparisons of Wind Turbine Blade Sections, Albuquerque, NM, 2017.
Wilkinson, L., Kay, E., and Lawless, M.: Braced for the Storm? Startling
Insights into the Impact of Climate Change on Offshore Wind Operations, in:
WindEurope 2019, WindEurope, Bilbao, Spain, 2019.
Wilks, D. S.: Statistical methods in the atmospheric sciences, Academic Press, Amsterdam, the Netherlands, 2011.
Winslow, G.: Secondary Losses: Using Operational Data to Evaluate Losses and
Revisit Estimates, in: AWEA Wind Resource and Project Energy Assessment
Workshop 2012, AWEA, Pittsburgh, PA, 2012.
Wiser, R., Bolinger, M., Barbose, G., Darghouth, N., Hoen, B., Mills, A.,
Rand, J., Millstein, D., Jeong, S., Porter, K., Disanti, N., and Oteri, F.:
2018 Wind Technologies Market Report, US Department of Energy, Office of Energy Efficiency & Renewable Energy, Washington, D.C., 2019.
Wolfe, J.: Deep Array Wake Loss in Large Onshore Wind Farms (A Model Validation), in: AWEA Wind Resource and Project Energy Assessment Workshop 2010, AWEA, Oklahoma City, OK, 2010.
Žagar, M.: Wind Resource from an OEM perspective, in: WindEurope 2019,
WindEurope, Bilbao, Spain, 2019.
Zhang, M., Pullinger, D., Hill, N., and Crutchley, T.: Validating wind flow
model uncertainty using operational data, in: WindEurope 2019, WindEurope,
Bilbao, Spain, 2019.