CRITICAL NEGOTIATIONS

Having sat on all sides of the negotiating table when COD was looming, we are strong proponents of an open performance-evaluation model based on mutually agreed upon expectations and a reasonable assignment of risk. It is possible and, indeed, preferable to navigate commissioning, start-up, testing and project completion in a way that is acceptable to all interested parties; that facilitates and expedites final payments; and, most important, that provides a detailed characterization of expected plant behavior. A process built around mutual agreement and consent best serves this outcome.

Once you have assembled a project team, it is critically important for stakeholders to engage in a candid discussion of performance test methods, objectives and constraints. These early planning decisions will guide team members from project development and construction through the COD milestone. The topics below always come up during the project testing phase and invariably cause problems when team members have conflicting expectations. We recommend discussing these subjects at project inception, establishing clear rules and contractual definitions, and revisiting the plan often.

Testing model. It is essential for team members to develop an energy model specifically for the performance test. The testing model will be similar to the accepted annual energy model, but it will be tuned to reflect the expected conditions at the time of testing. Develop a testing model that reflects contractual obligations above all else, meaning that contract language and terms should inform the modeling assumptions and performance risk allocations. The testing model must be dynamic and able to adapt to changes in design, implementation, testing methods and site conditions.
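For illustration, the minimal Python sketch below shows the idea of re-running an annual-style energy model with test-period conditions substituted in. The PVWatts-style power equation, dc capacity, temperature coefficient and loss factor are placeholder assumptions, not values from any particular project.

```python
# Minimal sketch of a "testing model": the annual energy model re-run with
# expected test-period conditions. All names and coefficients are
# illustrative placeholders, not values from any specific project.

def expected_power_kw(poa_wm2, cell_temp_c,
                      dc_rating_kw=20000.0,   # nameplate dc capacity (assumed)
                      gamma=-0.0038,          # module temperature coefficient, 1/degC (assumed)
                      derate=0.90):           # combined dc/ac loss factor (assumed)
    """PVWatts-style estimate of ac power at a given irradiance and cell temperature."""
    return dc_rating_kw * (poa_wm2 / 1000.0) * (1.0 + gamma * (cell_temp_c - 25.0)) * derate

# The annual model might reference summer design conditions...
annual_case = expected_power_kw(poa_wm2=1000.0, cell_temp_c=50.0)

# ...while the testing model substitutes conditions expected during the test window.
test_case = expected_power_kw(poa_wm2=750.0, cell_temp_c=35.0)

print(f"Annual-model reference power: {annual_case:,.0f} kW")
print(f"Testing-model target power:   {test_case:,.0f} kW")
```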

Uncertainty. All operational measurements have uncertainty, and the performance testing process must acknowledge this fact. Ignoring or negating uncertainty fails to allocate risk equitably. The argument that measurement uncertainty “can go either way” only applies if the installing contractor is contractually incentivized for performance in excess of 100%. As a starting point, we recommend estimating measurement uncertainty at 2%. Team members can revise this value after finalizing equipment selection and completing the performance test plan.
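As a sketch of how an agreed uncertainty band might enter the pass/fail criterion, the snippet below credits the 2% starting estimate against a hypothetical 97% capacity guarantee; both figures are assumptions for illustration only.

```python
# Illustrative pass/fail check that explicitly credits measurement
# uncertainty to the at-risk party. The 2% estimate and the 97% guarantee
# are assumptions, to be revised once the test plan is final.

MEASUREMENT_UNCERTAINTY = 0.02   # starting estimate of measurement uncertainty
GUARANTEE = 0.97                 # contractual capacity ratio (hypothetical)

def passes_capacity_test(measured_kw, expected_kw):
    ratio = measured_kw / expected_kw
    # The test passes if the measured ratio clears the guarantee
    # after the agreed uncertainty band is applied.
    return ratio >= GUARANTEE - MEASUREMENT_UNCERTAINTY

print(passes_capacity_test(measured_kw=18900.0, expected_kw=20000.0))  # 0.945 -> False
print(passes_capacity_test(measured_kw=19300.0, expected_kw=20000.0))  # 0.965 -> True
```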

Module output. Assign the risk associated with increased nameplate power ratings to whichever party buys the PV modules. If the installing contractor procures the modules, then it can dictate how much positive power tolerance it will backstop. If the owner buys the modules, the installing contractor has no recourse in the event that the project does not realize an expected increase in power; in this scenario, it may not be appropriate to include assumptions of positive power tolerance in the performance evaluation model.
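The toy calculation below shows how an assumed positive power tolerance changes the modeled dc capacity; the module rating, count and realized tolerance are hypothetical, and whether such an uplift belongs in the testing model at all depends on who procured the modules.

```python
# Hypothetical illustration of how a positive power tolerance assumption
# changes the modeled dc capacity. All values are placeholders.

module_nameplate_w = 400.0
module_count = 50000
positive_tolerance = 0.01   # e.g., modules binned 0/+3%; modeler assumes +1% realized

nameplate_dc_kw = module_nameplate_w * module_count / 1000.0
with_tolerance_kw = nameplate_dc_kw * (1.0 + positive_tolerance)

print(f"Nameplate dc capacity:           {nameplate_dc_kw:,.0f} kW")
print(f"With assumed positive tolerance: {with_tolerance_kw:,.0f} kW")
```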

Soiling. The possibility of zero percent soiling is a myth, especially in the context of long-duration performance tests. Contracts for performance testing must include a soiling allowance in some form, through either direct measurement at the time of testing or a reasonable estimate based on the wash cycle prior to testing. Reliably assessing soiling at the time of testing dramatically improves troubleshooting efforts and investigations of performance shortfalls. (See “Soiling Assessment in Large-Scale PV Arrays,” SolarPro, November/December 2016.)
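Where direct measurement is unavailable at the time of testing, a wash-cycle estimate might look like the following sketch; the daily soiling rate and saturation cap are site-specific assumptions, not recommended values.

```python
# Rough soiling-allowance estimate based on the wash cycle, for use when
# direct measurement is not available at the time of testing. The soiling
# rate and cap are site-specific assumptions, not universal values.

def estimated_soiling_loss(days_since_wash, daily_rate=0.001, max_loss=0.05):
    """Linear soiling accumulation with a saturation cap."""
    return min(days_since_wash * daily_rate, max_loss)

# Example: array last washed 20 days before the performance test.
loss = estimated_soiling_loss(days_since_wash=20)
print(f"Assumed soiling loss for the testing model: {loss:.1%}")  # 2.0%
```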

Loss models. Assumptions for ac loss, dc loss, transformer efficiency and inverter efficiency mature over time. Any model used for performance evaluation must evolve as the team better quantifies these values through design, equipment selection and installation. Equipment test sheets, particularly for transformers, are a good source of these data. When modeled and measured quantities diverge during testing, you can usually trace the root cause back to unrevised model assumptions that made their way into the testing phase.
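The short example below shows one way a loss-factor chain can be kept current as test-sheet data arrive; every value is illustrative, and the point is simply that a stale factor reappears later as a modeled-versus-measured gap.

```python
# Sketch of a loss-factor chain in a performance model. Each factor should
# be updated as design data, equipment test sheets and as-built information
# become available. All numbers are illustrative.

preliminary = {
    "dc_wiring":       0.980,
    "ac_wiring":       0.990,
    "transformer_eff": 0.990,
    "inverter_eff":    0.975,
}

# Revised with factory test-sheet data (e.g., measured transformer losses).
as_built = dict(preliminary, transformer_eff=0.985, inverter_eff=0.980)

def combined_derate(factors):
    result = 1.0
    for value in factors.values():
        result *= value
    return result

print(f"Preliminary derate: {combined_derate(preliminary):.4f}")
print(f"As-built derate:    {combined_derate(as_built):.4f}")
```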

Test methods. We strongly recommend using unmodified, standard test methods and shared evaluation tools. For example, ASTM International (formerly the American Society for Testing and Materials) has published a PV performance test standard (ASTM E2848-13), and the International Electrotechnical Commission (IEC) has published a suite of technical standards for PV system performance monitoring (IEC 61724-1), capacity testing (IEC 61724-2) and energy yield evaluation (IEC 61724-3). Testing methodologies based on technical standards are inherently an open-book approach. Energy models, input assumptions, performance targets and evaluation methods should follow suit. Invoking intellectual property claims to withhold evaluation test methods is a weak argument at best; there is nothing inherently secret about a spreadsheet tool. Our view is that any party at risk during the testing process has a right to review the performance assessment methodology.
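To make the open-book point concrete, the following sketch fits an ASTM E2848-style regression, P = E(a1 + a2·E + a3·Ta + a4·v), to synthetic data and evaluates it at assumed reporting conditions. A real capacity test follows the standard's data-filtering and reporting requirements; the coefficients and conditions here are placeholders.

```python
# Minimal sketch of the ASTM E2848-style regression used in many capacity
# tests: P = E * (a1 + a2*E + a3*Ta + a4*v), where E is plane-of-array
# irradiance, Ta is ambient temperature and v is wind speed. Synthetic data
# stand in for real measurements.
import numpy as np

rng = np.random.default_rng(0)
n = 500
E  = rng.uniform(400, 1000, n)   # plane-of-array irradiance, W/m^2
Ta = rng.uniform(10, 35, n)      # ambient temperature, degC
v  = rng.uniform(0, 8, n)        # wind speed, m/s

# Synthetic "measured" power generated from assumed coefficients plus noise.
true_coeffs = np.array([20.0, -0.002, -0.08, 0.5])
X = np.column_stack([E, E * E, E * Ta, E * v])
P = X @ true_coeffs + rng.normal(0, 50, n)

# Fit the regression coefficients from the data.
a, *_ = np.linalg.lstsq(X, P, rcond=None)

# Evaluate regressed power at the contractually agreed reporting conditions.
E_rc, Ta_rc, v_rc = 850.0, 25.0, 3.0
P_rc = E_rc * (a[0] + a[1] * E_rc + a[2] * Ta_rc + a[3] * v_rc)
print(f"Regressed capacity at reporting conditions: {P_rc:,.0f} kW")
```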

Transparency. It is impossible to overstate the importance of transparency. To set up a project for a successful closeout, all project stakeholders need to understand the performance testing process long before testing takes place. Black boxes do not encourage cooperation or help characterize measured performance. Using opaque evaluation methods with proprietary module files, meteorological data, inverter models or ac loss models invariably causes problems. If there are no secrets, there are no surprises.
