Calculating PV Degradation Rates Using Open-Source Software

Methodological Improvements

Researchers at NREL and other industry stakeholders have tested and compared many Rd calculation methodologies, weighing factors such as ease of use and the amount of time needed to determine a degradation rate with a relatively low degree of uncertainty. RdTools not only compares favorably in these regards, but also offers statistically robust analysis in the face of common data-quality problems. Specifically, RdTools avoids errors associated with linear regressions and tolerates imperfect sensor data, seasonality, and seasonal soiling.

YOY analysis. The YOY analysis method in RdTools represents an improvement over classic linear regression analyses. The problem with regression line slopes is that they are sensitive to data outliers near the beginning or end of the time series, because terminal data have high statistical leverage. Objectively filtering for outliers in a regression analysis is complex, as the filter needs to move in tandem with the unknown degradation rate to follow the gradual downward shift of the data.
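
To see the leverage problem concretely, the following minimal sketch (synthetic data, not RdTools output) fits an ordinary least-squares slope to a 5-year performance series degrading at 0.5% per year, then repeats the fit after corrupting a single measurement at the end of the series:

import numpy as np

# Synthetic 5-year monthly performance series with a true trend of -0.5%/yr.
rng = np.random.default_rng(42)
years = np.linspace(0, 5, 60)
perf = 1.0 - 0.005 * years + rng.normal(0, 0.005, years.size)

slope_clean = np.polyfit(years, perf, 1)[0]

# One corrupted final measurement (for example, a logging glitch).
perf_outlier = perf.copy()
perf_outlier[-1] -= 0.05

slope_outlier = np.polyfit(years, perf_outlier, 1)[0]

print(f"OLS slope without outlier: {slope_clean * 100:.2f} %/yr")
print(f"OLS slope with one terminal outlier: {slope_outlier * 100:.2f} %/yr")

Because the corrupted point sits at the end of the series, where leverage is highest, the fitted slope shifts noticeably even though only 1 of 60 measurements changed.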

In their IEEE Journal of Photovoltaics article “Robust PV Degradation Methodology and Application” (see Resources), Dirk Jordan and his co-authors found that a YOY method of calculating Rd reduced uncertainty relative to two different types of linear regression analyses. Because the YOY analysis calculates a median value from a distribution of Rd slopes, it is less sensitive to data outliers, as well as to snow and soiling events. The YOY method is also resilient to data shifts, which often occur as the result of software changes or maintenance events such as sensor replacement.
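
A simplified, self-contained version of that calculation takes only a few lines of pandas. The helper name yoy_degradation_sketch below is ours for illustration; RdTools’ own year-on-year routine (degradation_year_on_year) additionally handles renormalization, data filtering, and bootstrapped confidence intervals.

import numpy as np
import pandas as pd

def yoy_degradation_sketch(energy_normalized):
    """Median year-on-year degradation rate (%/yr) from a daily,
    performance-normalized energy series, plus the full YOY distribution."""
    # Pair each day with the value exactly one year (365 days) earlier.
    prior = energy_normalized.shift(365, freq="D")
    pairs = pd.concat([energy_normalized, prior], axis=1, keys=["now", "prior"]).dropna()
    # Each pair spans one year, so the relative change is already in %/yr.
    yoy_rates = 100 * (pairs["now"] - pairs["prior"]) / pairs["prior"]
    return yoy_rates.median(), yoy_rates

# Example with a synthetic daily series degrading at roughly -0.6%/yr.
idx = pd.date_range("2015-01-01", periods=5 * 365, freq="D")
series = pd.Series(1.0 - 0.006 * np.arange(idx.size) / 365, index=idx)
rd_median, yoy_dist = yoy_degradation_sketch(series)
print(f"Median YOY degradation rate: {rd_median:.2f} %/yr")

Because the estimate is the median of many one-year comparisons, no single comparison can dominate the result the way a high-leverage point dominates a regression slope.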

If a data shift is subtle enough to go unnoticed, it can influence the results of linear regression analyses. By contrast, a median YOY Rd value resists this type of data shift, because the shifted data simply appear as outliers on the histogram in a YOY analysis. Missing data pose a similar problem. If end-of-year data are missing, analysts conducting a linear regression analysis need to eliminate data for the last fraction of the year so that seasonal effects do not have an undue influence on the Rd results. The YOY technique, meanwhile, is tolerant of seasonal issues, meaning that analysts can use the full data collection time span, including fractional years.
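
Continuing the sketch above, a hypothetical 2% step shift in the final year of data (for example, a replacement sensor reading slightly high) illustrates this behavior: the affected year-on-year comparisons form a separate cluster in the histogram, while the median barely moves.

import matplotlib.pyplot as plt

# Inject a hypothetical +2% step change over the last year of data.
shifted_series = series.copy()
shifted_series.iloc[-365:] += 0.02

rd_shifted, yoy_shifted = yoy_degradation_sketch(shifted_series)
print(f"Median YOY rate with data shift: {rd_shifted:.2f} %/yr")

# The shifted comparisons appear as a distinct cluster of outliers,
# well away from the median marked by the dashed line.
plt.hist(yoy_shifted, bins=60)
plt.axvline(rd_shifted, color="k", linestyle="--", label="median")
plt.xlabel("Year-on-year rate (%/yr)")
plt.ylabel("Count")
plt.legend()
plt.show()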

Another problem with linear regression analyses is that they assume linearity. In the real world, however, degradation is not necessarily linear, as Jordan and others have shown in the Progress in Photovoltaics article “PV Degradation Curves: Non-Linearities and Failure Modes” (see Resources). RdTools’ YOY analysis method limits the impact of nonlinearity by showing a distribution of degradation rates rather than a single value. If a system has, for example, two different degradation rates, switching from one to the other at some point in time, users may see a pair of bumps in the histogram instead of a single peak. To detect nonlinear degradation, RdTools users can analyze the data in multiple increments of two or more years to estimate these different Rd values.
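
One simple way to explore this, again building on the earlier sketch, is to repeat the year-on-year calculation over consecutive multi-year windows and compare the resulting medians. Each window spans at least two years so that every year-on-year pair fits within it; windowed_yoy is our own illustrative helper, not an RdTools function.

def windowed_yoy(energy_normalized, window_years=2):
    """Median YOY rate for overlapping windows of `window_years` years each."""
    start = energy_normalized.index.min()
    end = energy_normalized.index.max()
    medians = {}
    window_start = start
    while window_start + pd.DateOffset(years=window_years) <= end:
        window_end = window_start + pd.DateOffset(years=window_years)
        chunk = energy_normalized.loc[window_start:window_end]
        medians[window_start], _ = yoy_degradation_sketch(chunk)
        window_start = window_start + pd.DateOffset(years=1)  # step forward one year
    return pd.Series(medians)

print(windowed_yoy(series))

A system degrading linearly produces roughly the same median in every window; a marked change between windows suggests a shift in degradation behavior that merits closer investigation.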

The assumption of linearity is also problematic if Rd calculations are conflated with the accuracy of nameplate ratings. The “Compendium of Photovoltaic Degradation Rates” (see Resources) compiles more than 11,000 degradation rates and reveals that studies relying on a single performance measurement reach different findings than more detailed analyses. In particular, comparing a single unconfirmed performance measurement against the nameplate rating, rather than against measured performance data, may be inaccurate, especially for older modules whose nameplate ratings may have been slightly under- or overestimated. Newer PV modules have tended to demonstrate more accurate nameplate ratings, and quality modules are typically rated to account for initial stabilization. The authors conclude that Rd calculations based on multiple clear-sky measurements, including initial post-stabilization values, are more accurate than those based on nameplate ratings and a single performance data point.
