PV Performance Modeling

Assessing Variability, Uncertainty and Sensitivity

Accurately predicting PV system performance is one of the most important tasks in the PV industry. Expected performance informs design decisions, serves as the basis of financial models, drives power purchase agreement prices and can make or break a potential project.

The solar industry needs consistent and reproducible energy performance estimates to satisfy investors and independent engineers. Even a 1% difference in predicted energy production can have a significant impact on project financing and investor confidence. However, the industry lacks established rules and standards for using performance modeling tools. As a result, performance estimates that different parties produce for the same project can show large discrepancies.

To illustrate this issue, we provided a group of industry stakeholders—project developers, software developers and researchers—with the example set of PV system specifications in Table 1. We asked these subject matter experts to predict system performance using whatever software and assumptions they deemed most appropriate. When we analyzed their results, we found a startling 13% difference between the maximum and minimum energy predicted.

Since this 13% delta could determine whether a project moves forward at all, we wanted to understand what accounts for the spread in the modeling results and which factors have the greatest impact. To accomplish this, we compared not only the end results, but also the individual results associated with each step or simulation in the performance modeling process, as shown in Figure 1. While these results would not necessarily hold true for every project type and location, they are reasonably indicative of the factors that cause the largest discrepancies from one model to another.

In this article, we explore the largest sources of variability identified in our group modeling exercise and discuss each of the factors in Figure 1 that account for at least a 1% variation in the modeled results: global horizontal irradiance (GHI), soiling, irradiance level and temperature, wire losses, mismatch losses and light-induced degradation (LID). We also discuss incidence angle and solar transposition effects, which can account for significant differences in modeled results even though everyone in our test group used similar assumptions. Note that this article is not intended as an introduction to PV system performance modeling, but rather as an aid to help experienced modelers make more-informed decisions when working with modeling tools. If you are new to this topic, we recommend that you first read “Production Modeling for Grid-Tied PV Systems” (SolarPro, April/May 2010) and “Performance Modeling Tools Overview”.
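Several of the loss factors listed above, such as soiling, wire losses, mismatch and LID, are typically applied as multiplicative derate factors in a performance model, so their combined effect is slightly smaller than the sum of the individual percentages. The sketch below illustrates the arithmetic; the loss names and percentages are purely illustrative, not recommended values for any particular project.

```python
# Illustrative loss fractions; real values vary by project and must
# come from site conditions and equipment data, not these numbers.
losses = {"soiling": 0.02, "wiring": 0.015, "mismatch": 0.01, "lid": 0.015}

# Loss factors compound multiplicatively: each one scales the energy
# that survives the previous losses.
derate = 1.0
for name, fraction in losses.items():
    derate *= 1.0 - fraction

print(f"combined derate factor: {derate:.3f}")
```

Note that the combined derate (about 94.1% here) is not simply 100% minus the 6% sum of the individual losses, which is why modeling tools chain these factors rather than adding them.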

Global Horizontal Irradiance

GHI is a measure of the total sunlight falling on a horizontal plane at the earth’s surface. It has two distinct components: direct (beam) irradiance, light that arrives in a straight line from the sun, and diffuse irradiance, light scattered from the dome of the sky under both clear and cloudy conditions. The GHI at a location is the starting point for determining the amount of irradiance that will fall on an array, and the weather dataset determines this value.
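The relationship between these components is GHI = DNI · cos(θz) + DHI, where DNI is direct normal irradiance, DHI is diffuse horizontal irradiance and θz is the solar zenith angle: only the horizontal projection of the direct beam contributes. A minimal Python sketch, using illustrative irradiance values rather than measured data:

```python
import math

def ghi_from_components(dni, dhi, zenith_deg):
    """Combine direct and diffuse irradiance (W/m^2) into GHI.

    GHI = DNI * cos(solar zenith) + DHI. The direct beam is projected
    onto the horizontal plane; diffuse light from the sky dome arrives
    on that plane without projection.
    """
    cos_z = math.cos(math.radians(zenith_deg))
    # Clamp at zero so there is no beam contribution when the sun is
    # below the horizon (zenith > 90 degrees).
    return dni * max(cos_z, 0.0) + dhi

# Example: clear midday conditions (illustrative values only)
print(round(ghi_from_components(dni=850.0, dhi=110.0, zenith_deg=30.0)))  # → 846
```

This closure relationship is also how weather-data quality checks cross-validate the three irradiance quantities against one another.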

For our group modeling exercise, we selected Santa Fe Springs, California—a city in southeast Los Angeles County—as the project location. The modelers in our group used weather files from six different sources to represent the solar resource in Santa Fe Springs. Since the GHI data in these files varied by as much as 5.4%, the expected solar resource accounted for the single largest source of variability in the modeling results. This variability due to weather data is not unexpected, as illustrated in Figure 2. According to modeling steps outlined on the PV Performance Modeling Collaborative website (pvpmc.sandia.gov), “uncertainty in the weather data usually accounts for a large amount of the total uncertainty.”
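To quantify this kind of spread, modelers often compare annual GHI totals across candidate weather files for the same site. The sketch below uses hypothetical dataset names and totals, chosen purely to illustrate the max-versus-min comparison:

```python
# Hypothetical annual GHI totals (kWh/m^2) from several weather
# datasets for the same site; the names and values are illustrative.
annual_ghi = {
    "dataset_a": 1790.0,
    "dataset_b": 1755.0,
    "dataset_c": 1820.0,
    "dataset_d": 1727.0,
}

lo, hi = min(annual_ghi.values()), max(annual_ghi.values())
spread_pct = 100.0 * (hi - lo) / lo  # spread relative to the lowest total

print(f"GHI spread across datasets: {spread_pct:.1f}%")
```

A spread of a few percent at this step propagates almost directly into the final energy estimate, which is why weather-file selection dominates model-to-model variability.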

Weather data selection is one of the most challenging aspects of the modeling process. In effect, we are using historical weather data to predict future performance. However, as Niels Bohr, the Nobel Prize–winning Danish physicist, famously noted: “Prediction is very difficult, especially if it’s about the future.” Weather and weather patterns are inherently variable, and the measurement methods used to collect weather data introduce additional uncertainty of their own.

While modelers often have a large number of historical weather files from which to choose, the industry lacks an established protocol for identifying the most appropriate dataset. In the absence of hard-and-fast rules, we recommend evaluating and selecting weather data based on criteria such as period of record, source and quality, representativeness and agreement with other applicable sources.

