Achieving Commercial Operations in Large-Scale PV Power Systems

The common goal of PV industry stakeholders is to deliver high-quality, reliable energy assets. But do planning and testing methods support this goal?

A PV project’s transition from the construction phase to the operations phase is a flurry of activity. Project stakeholders must coordinate schedules, materials, trades, troubleshooting and testing while adhering to design documents, contractual requirements and project milestones. As such, the sprint to achieve commercial operations is a busy time with many challenges. The shared goal is to get the project to the commercial operations date (COD), the point at which the asset begins to generate revenue.

As independent engineers, we work alongside all of the project stakeholders—owners, financiers, and EPC firms—to help steward large-scale PV projects to the finish line, the COD milestone. We have participated in projects where partners from all trades and disciplines walked away with a profound feeling of satisfaction. We have also seen some unmitigated disasters, which left all project team members frustrated and at significant financial risk.

The worst-case scenario is when a project falls short of its performance test goals and the remedies are not readily apparent. This performance-related impasse is a precarious place to be at the end of a project. The resolution usually takes place at a conference table—or, worse, in a room full of lawyers—and involves discussions of liquidated damages. When projects get to this point, there is little that we as independent engineers can do to solve the problems. This article’s goal is to help you avoid such an impasse.

Here we share lessons learned from our project completion experiences, both good and bad, and our recommendations for a more elegant path to commercial operations, one that starts with the performance-test milestone in mind. While there are many possible paths for getting a project into operation, we frame our discussion around performance testing because this is the last big step before a PV project achieves COD. Our experience is that a collaborative and transparent performance evaluation process that fairly allocates risk delivers high-value PV assets while minimizing conflict and financial risk. While we are not contending that an open project–delivery model eliminates problems, we can certify that it solves problems much faster than more antagonistic approaches.

Performance Testing

The goal of performance testing is to benchmark system performance against a set of contractually mandated performance parameters such as system capacity, efficiency (performance ratio) and energy yield over time to ensure that a PV asset will meet owners’ performance and financial expectations. A successful performance testing process saves time, money and resources. It also provides valuable baseline information for ongoing operations. (See “PV System Energy Performance Evaluations,” SolarPro, October/November 2014.)
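
To make the performance ratio concept concrete, the following is a minimal sketch, in Python, of an IEC 61724-style performance ratio calculation. The plant size, energy and insolation values are illustrative placeholders, not data from any particular project.

    G_REF = 1000.0  # reference irradiance at STC, W/m^2

    def performance_ratio(ac_energy_kwh, poa_insolation_kwh_per_m2, dc_nameplate_kw):
        """PR = final yield (kWh per kW installed) / reference yield (full-sun hours)."""
        reference_yield = poa_insolation_kwh_per_m2 / (G_REF / 1000.0)
        final_yield = ac_energy_kwh / dc_nameplate_kw
        return final_yield / reference_yield

    # Example: a 20 MWdc plant producing 96,000 kWh on a day with 6.0 kWh/m^2 of POA insolation
    print(round(performance_ratio(96_000, 6.0, 20_000), 2))  # 0.8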

Unrealistic expectations—often based on proprietary energy models, weather data files and evaluation tools—are the most common cause of end-of-project delays. For example, we have been involved in projects whose performance expectations stretched every subsystem to its physical limits, tacitly requiring chronic overperformance to achieve a passing evaluation.

Performance testing is especially onerous when the terms and conditions effectively require that all subsystems perform at or above expected efficiency; that module capacity exceed nameplate power ratings; that modules remain perfectly clean for the duration of the test; that dc, ac, inverter and transformer losses stay at or below expected levels; and, most problematically, that all measurements be perfectly representative and accurate, with no uncertainty. These requirements are not an exaggeration, but rather an example of what happens when one party dictates all contractual testing and completion terms.

The scramble to meet a nearly unattainable goal is unbelievably expensive. One-sided terms are a setup for disappointment and contribute to an antagonistic project delivery model that we believe is both counterproductive and avoidable. Unreasonable or unattainable goals do not improve system performance.

CRITICAL NEGOTIATIONS

Having sat on all sides of the negotiating table when COD was looming, we are strong proponents of an open performance-evaluation model based on mutually agreed upon expectations and a reasonable assignment of risk. It is possible and, indeed, preferable to navigate commissioning, start-up, testing and project completion in a way that is acceptable to all interested parties; that facilitates and expedites final payments; and, most important, that provides a detailed characterization of expected plant behavior. A process built around mutual agreement and consent best serves this outcome.

Once you have assembled a project team, it is critically important for stakeholders to engage in a candid discussion of performance test methods, objectives and constraints. These early planning decisions will guide the team members during project development and construction through the COD milestone. The below topics always come up during the project testing phase and invariably cause problems when team members have conflicting expectations. We recommend discussing these subjects at project inception, establishing clear rules and contractual definitions, and revisiting the plan often.

Testing model. It is essential for team members to develop an energy model specifically for the performance test. The testing model will be similar to the accepted annual energy model, but it will be tuned to reflect the expected conditions at the time of testing. Develop a testing model that reflects contractual obligations above all else, meaning that contract language and terms should inform the modeling assumptions and performance risk allocations. The testing model must be dynamic and able to adapt to changes in design, implementation, testing methods and site conditions.

Uncertainty. All operational measurements have uncertainty, and the performance testing process must acknowledge this fact. Ignoring or negating uncertainty fails to allocate risk equitably. The argument that measurement uncertainty “can go either way” only applies if the installing contractor is contractually incentivized for performance in excess of 100%. As a starting point, we recommend estimating measurement uncertainty at 2%. Team members can revise this value after finalizing equipment selection and completing the performance test plan.
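
As a hedged illustration of how an agreed uncertainty allowance might enter a capacity-test evaluation, consider the sketch below. The 97 percent pass threshold and 2 percent uncertainty are placeholder values, and whether the uncertainty credit accrues to the builder or the owner is a negotiated contract term, not something this sketch decides.

    def capacity_test_passes(measured_kw, target_kw, pass_fraction=0.97, uncertainty=0.02):
        """Credit the agreed measurement uncertainty toward the contractual pass threshold."""
        ratio = measured_kw / target_kw
        return (ratio + uncertainty) >= pass_fraction

    # Example: 19,150 kW measured against a 20,000 kW target
    print(capacity_test_passes(19_150, 20_000))  # 0.9575 + 0.02 = 0.9775 -> True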

Module output. Assign the risk associated with increased nameplate power ratings to whichever party buys the PV modules. If the installing contractor procures the modules, then it can dictate how much positive power tolerance it will backstop. If the owner buys the modules, the installing contractor has no recourse in the event that the project does not realize an expected increase in power; in this scenario, it may not be appropriate to include assumptions of positive power tolerance in the performance evaluation model.

Soiling. The possibility of zero percent soiling is a myth, especially in the context of long-duration performance tests. Contracts for performance testing must include a soiling allowance in some form, through either direct measurement at the time of testing or a reasonable estimate based on the wash cycle prior to testing. Reliably assessing soiling at the time of testing dramatically improves troubleshooting efforts and investigations of performance shortfalls. (See “Soiling Assessment in Large-Scale PV Arrays,” SolarPro, November/December 2016.)
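
A minimal sketch of how a soiling allowance might fold into the test target follows. The 1.5 percent loss is illustrative; in practice the value would come from a soiling station measurement or an estimate tied to the most recent wash.

    def soiling_adjusted_target(clean_model_target_kwh, soiling_loss=0.015):
        """Reduce the clean-module energy target by the agreed soiling allowance."""
        return clean_model_target_kwh * (1.0 - soiling_loss)

    print(round(soiling_adjusted_target(96_000)))  # 94,560 kWh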

Loss models. AC loss, dc loss, transformer efficiency and inverter efficiency assumptions mature over time. Any model used for performance evaluation must evolve as the team better quantifies these values through design, equipment selection and installation. Equipment test sheets, particularly for transformers, are a good source of the data. When modeled and measured quantities diverge during testing, you can usually trace the root cause back to unrevised model assumptions that made their way to the testing phase.
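
The sketch below shows, using assumed placeholder values, how an expected-output chain built from loss assumptions can be recomputed as equipment test sheets and as-built data replace design estimates.

    def expected_ac_kw(dc_kw, losses):
        """Multiply a DC power estimate through a chain of fractional losses."""
        output = dc_kw
        for loss in losses.values():
            output *= (1.0 - loss)
        return output

    design_losses = {"dc_wiring": 0.015, "inverter": 0.020, "transformer": 0.010, "ac_wiring": 0.005}
    as_built_losses = {"dc_wiring": 0.012, "inverter": 0.018, "transformer": 0.008, "ac_wiring": 0.005}

    print(round(expected_ac_kw(20_000, design_losses)))    # target from preliminary assumptions
    print(round(expected_ac_kw(20_000, as_built_losses)))  # target updated from test sheets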

Test methods. We strongly recommend using unmodified, standard test methods and shared evaluation tools. For example, ASTM International (formerly the American Society for Testing and Materials) has published a PV performance test standard (ASTM E2848-13), and the International Electrotechnical Commission (IEC) has published a suite of technical standards for PV system performance monitoring (IEC 61724-1), capacity testing (IEC 61724-2) and energy yield evaluation (IEC 61724-3). Testing methodologies based on technical standards are inherently open-book, and energy models, input assumptions, performance targets and evaluation methods should follow suit. Invoking intellectual property claims to hide evaluation methods is a weak argument at best; there is nothing inherently secret about a spreadsheet tool. Our view is that any party at risk during the testing process has a right to review the performance assessment methodology.
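
For readers unfamiliar with the ASTM E2848 approach, the following sketch fits its regression, P = E·(a1 + a2·E + a3·Ta + a4·v), by ordinary least squares. The function expects NumPy arrays of filtered test-period data (POA irradiance E, ambient temperature Ta, wind speed v, AC power P), and the reporting conditions at which the regression is evaluated come from the agreed test plan, not from this sketch.

    import numpy as np

    def fit_e2848(E, Ta, v, P):
        """Fit P = E*(a1 + a2*E + a3*Ta + a4*v); returns [a1, a2, a3, a4]."""
        X = np.column_stack([E, E * E, E * Ta, E * v])
        coeffs, *_ = np.linalg.lstsq(X, P, rcond=None)
        return coeffs

    def regressed_power(coeffs, E, Ta, v):
        """Evaluate the fitted regression at given conditions."""
        a1, a2, a3, a4 = coeffs
        return E * (a1 + a2 * E + a3 * Ta + a4 * v)

    # Measured capacity is the regression evaluated at the reporting conditions,
    # for example E = 800 W/m^2, Ta = 20 C, v = 3 m/s, and compared with the target.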

Transparency. It is impossible to overstate the importance of transparency. To set up a project for a successful closeout, all project stakeholders need to understand the performance testing process long before testing takes place. Black boxes do not encourage cooperation or help characterize measured performance. Using opaque evaluation methods with proprietary module files, meteorological data, inverter models or ac loss models invariably causes problems. If there are no secrets, there are no surprises.

The following guidelines help ensure an equitable performance test process, one that is free of misunderstandings and that supports ongoing operations:

  • Create and maintain a project closeout team that can meet as necessary during testing to solve immediate problems. The team should consist of knowledgeable members representing the EPC team, supervisory control and data acquisition (SCADA) integrator, owner, owner’s and EPC’s engineers, inverter vendor and tracker provider (if applicable).
  • Ensure that project team members have access to any information relevant to system commissioning, including array test reports, inverter burn-in test results, module flash test data and manufacturer start-up test reports.
  • Use a dedicated performance testing energy model that represents site and plant conditions at the time of testing and incorporates simulation assumptions and parameters mutually derived by the owner, builder and performance engineering personnel.
  • Share all the testing model inputs and outputs—including module files (.pan), inverter files (.ond), shading files (.shd), meteorological files (.met), hourly output data files (8,760 exports) and test target derivations (typically spreadsheets)—and performance test evaluation methods.
  • Establish standard testing data downloads that all stakeholders can access.

While this degree of transparency is a departure from convention, we have found that it really works. Sharing the means and methods for testing essentially enlists a team of troubleshooters—an extremely valuable tool—to expedite test and project completion. In the words of one owner: “When we all work together, we have fewer fingers pointing and more fingers fixing.”

TEST PREPARATION

Planning for commercial operation starts with a thorough understanding of the contract and performance test requirements. These requirements inform the strategy that project stakeholders use to prepare project documentation, evaluate SCADA requirements, specify and install measurement devices and validate sensors. To the extent that the leadership team understands the deliverables in advance, it can have all the documentation and requirements ready for the field team. This guidance ensures that the project team installs the system correctly the first time and accurately documents key information in the process.

Test implementation starts in the back office, with the procurement of the data acquisition system and measurement devices, and continues in the field as the project nears mechanical completion. These general steps in the process can easily mature into a working, dynamic checklist.

Precommissioning. During precommissioning, assemble a dedicated team, representing all the relevant project stakeholders, to lead the performance testing process. As a team, generate the documents needed for performance testing; create a testing model that is separate from the yearly model; and determine the plant-testing configuration, reporting conditions and targets. Next, review the SCADA and sensor installation plans and specifications to make sure these meet the requirements of the performance test standard. Verify the data collection rate and list of data points for the test. Coordinate with the field crew to document inverter and subarray mapping, and validate input channel labeling and reporting.

Start-up and commissioning. Since the activities in this step start the countdown to project completion, it is important to coordinate with all the stakeholders and set the dates and schedule for performance testing. At start-up, commission and validate the SCADA system and sensor accuracy. Next, troubleshoot the inverters and field wiring. Conclude with a final commissioning to close out any punch-list items, run practice tests, validate performance evaluation tools and verify data streams.

Performance testing. Once everything is working, the project team can determine the start and stop times for the performance test. As data come in, run the analysis, disseminate the data sets, compare evaluations and determine test results. Given decent weather, a transparent process and reasonable parties, you will obtain definitive results: The project will pass or the cause of failure will be clear, and the team can try again after fixing the problem.

Project finalizing. Once the plant passes, the team can finalize the project test results, document the process, develop a baseline performance model for the plant and assemble a final punch list for completion. Organized and meticulous documentation at this stage is critical if the project is to achieve sign-off for commercial operations. This documentation also provides the site records that the owners, asset managers, system auditors and operations teams will rely on in the years to come. Most important, good documentation saves everyone time and money.

DOCUMENTATION

High-quality documentation facilitates future transactions and forms the foundation for successful operations. With a standards-based performance test process, end-of-project documentation provides a baseline for benchmarking system performance against other assets in an owner’s portfolio, informs the operations and maintenance bid, and serves as a starting point for the plant evaluation documentation required when the asset is sold. Think of the standards-based performance test documentation as a factory acceptance test certificate for a fielded PV power plant. Without proper documentation, the asset is more difficult to maintain and sell for a high price because there is no proof that the site performs as expected.

At the precommissioning stage, it is useful to create a commissioning folder prepopulated with relevant forms and lists of required information. As the project approaches completion, this folder becomes a central repository for all of the documents and data that the project team will pass on to the owner and operations team. At project closeout, this folder should include the following:

  • Contracts and addenda related to the performance test
  • Test model, including descriptions of inputs, all assumptions and detailed output
  • Performance test technical standards
  • Performance test workbook with open-source evaluation methods and formulas
  • Combiner box as-builts identifying string counts, physical locations and names
  • Detailed map of inverters, combiners and current measurement channels
  • Datasheets and calibration certificates for all equipment
  • Plans and documents required for correct sensor installation
  • SCADA platform permissions and log-in information
  • Functional testing checklist and test results
  • Mechanical completion certification and substantial completion forms
  • Form for permission to operate, as well as other COD forms and requirements

Knowing what deliverables you need at project closeout is crucial to identifying and collecting the information and documentation for each successive step. Anyone who has gone through project closeout knows that proper documentation is conducive to a smooth and orderly process, whereas incomplete documentation results in a series of fire drills that waste time and resources.

Identifying string outages, for example, is a labor-intensive process unless you have accurately mapped the path of the combiner box wires to the inverter input channels. If you do not properly identify and map data points in the SCADA system, operations personnel cannot use the monitoring system to identify missing string inputs remotely. To obtain this information before energizing the plant, the project team needs to ensure that field personnel fill out forms documenting as-built field wiring conditions, and then pass the completed forms on to the SCADA vendor. If the team fails to do this work in advance, technicians can waste an entire day in the field as they will have to shut down each inverter in succession to document the wiring.
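
The sketch below illustrates the kind of remote check that accurate mapping enables, using hypothetical channel names, string counts and current values: with the as-built string counts on hand, a low combiner current immediately suggests how many strings are out.

    combiner_string_counts = {"02-04": 22, "02-05": 20}    # from as-built field wiring forms
    measured_current_a = {"02-04": 178.0, "02-05": 181.0}  # from SCADA at comparable irradiance
    per_string_current_a = 9.1                             # expected at the observed irradiance

    for channel, strings in combiner_string_counts.items():
        expected = strings * per_string_current_a
        if measured_current_a[channel] < 0.9 * expected:
            missing = round((expected - measured_current_a[channel]) / per_string_current_a)
            print(f"{channel}: approximately {missing} string(s) out")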

It is important to assign meaningful sensor names to aid with troubleshooting activities, both in the near term and over the life of the plant. Proper documentation also extends to naming conventions in the SCADA interface, as well as the labels inside equipment boxes. Since underperformance investigations typically start in the SCADA portal and lead to the field, we recommend assigning descriptors that identify the inverter, combiner box and string count. With a standard naming convention in place, performance analysts and service technicians can look at a label such as “02-04 [22]” and know immediately that there are 22 strings on combiner box 4 of inverter 2. This encoded information is useful for repairing problems or identifying any changes in field conditions after commissioning.
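
As a small illustration, a helper like the hypothetical one below can decode that convention so analysis scripts and technicians interpret labels the same way.

    import re

    LABEL_PATTERN = re.compile(r"^(\d+)-(\d+)\s*\[(\d+)\]$")

    def parse_channel_label(label):
        """Decode labels such as '02-04 [22]' into inverter, combiner and string count."""
        match = LABEL_PATTERN.match(label.strip())
        if match is None:
            raise ValueError(f"Label does not follow the naming convention: {label!r}")
        inverter, combiner, strings = (int(g) for g in match.groups())
        return {"inverter": inverter, "combiner": combiner, "string_count": strings}

    print(parse_channel_label("02-04 [22]"))
    # {'inverter': 2, 'combiner': 4, 'string_count': 22}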

NO DATA, NO DICE

Proper planning, installation, commissioning and validation of the SCADA system and its meteorological sensors are essential to bringing a project to a successful close. Early in project development, the project team must discuss SCADA specifications and the associated design and installation details. Gathering this information cannot be an afterthought, as data acquisition is the single most decisive factor in the performance test outcome, pass or fail. To close the project out, the SCADA system needs to not only meet utility requirements, but also fulfill any contractual obligations related to performance testing. (See “SCADA Systems for Large-Scale PV Plants,” SolarPro, May/June 2017.)

The SCADA vendor needs to field the right gear and specify the proper data collection rate. To get meaningful values, the system should have the ability to roll up multiple data points per test interval. For example, if the performance test calls for 1-minute data sets, then the data-polling rate might be set for 5 seconds. Coincident measurement is key to accurate, high-resolution performance analysis; the faster the data collection rate, the more measurement coincidence matters. Some SCADA systems do not sync time stamps to a network clock or GPS, thereby calling measurement simultaneity into question. SCADA providers must be aware of these types of test requirements, both explicit and implicit.
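
As one hedged example of the roll-up, the pandas sketch below averages 5-second polls into the 1-minute test interval and flags incomplete minutes. The column names and the synthetic data are assumptions for illustration; a real workflow would start from the SCADA export.

    import numpy as np
    import pandas as pd

    # Synthetic stand-in for a SCADA export polled every 5 seconds (column names are assumptions).
    index = pd.date_range("2018-06-01 12:00", periods=720, freq="5s")
    raw = pd.DataFrame({
        "ac_power_kw": np.random.normal(18_500, 150, len(index)),
        "poa_irradiance_wm2": np.random.normal(920, 10, len(index)),
    }, index=index)

    # Roll the 5-second polls up to the 1-minute test interval and flag incomplete minutes.
    one_minute = raw.resample("1min").mean()
    one_minute["samples"] = raw["ac_power_kw"].resample("1min").count()
    complete = one_minute[one_minute["samples"] == 12]  # 12 five-second polls per full minute
    print(complete.head())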

The details matter most when it comes to installing the monitoring system so that it properly reports field conditions. A correctly installed system allows remote troubleshooting and expedites the process of identifying and resolving problems. At the end of a project, this efficiency can save a lot of labor costs and shorten the schedule. More important, the system will produce data that represent the installed system, allowing it to pass the performance test with indicative results.

Performance assessment, whether at the time of testing or in operation, directly depends on reliable, accurate measurements of primary data. The team must install sensors correctly and then validate, cross-check and correctly map them in the SCADA system. The testing protocol will dictate the primary measurement sensors, which typically include irradiance sensors, power meters and temperature sensors.

Irradiance sensors. Pyranometer installation errors are the most common cause of perceived performance problems. As an example, imagine a north-sloping tracking array that uses plane-of-array (POA) irradiance sensors leveled to a horizontal axis rather than aligned to the axis of the modules. In this scenario, the array is canted away from the sun but the POA sensor is not, so the sensor reports more irradiance than the array actually receives. The energy model, especially one that assumes a perfectly flat site, agrees with the sensor placement, but neither matches the as-built condition. This seemingly small mismatch between modeled and measured conditions will likely cause the test to incorrectly evaluate the system as underperforming.

Energy test results are especially sensitive to irradiance data. If you do not install a POA irradiance sensor at the same angle as the array, the resulting measurements will not accurately reflect module orientation. Similarly, if you do not make sure the bubble level is centered in the level window on a global horizontal irradiance (GHI) sensor, the accuracy of these data will suffer. It is easy to overlook small alignment issues, but they can have a large impact. Misaligned pyranometers are sometimes the source of hard-to-diagnose errors that can lead to performance test failures. Fortunately, a field team can easily identify and correct these problems using digital levels and careful measurements to properly adjust sensors.

POA irradiance, because it directly affects output power, is perhaps the single most important parameter to verify and capture as accurately as possible. For best results, irradiance sensors must be stable, firmly mounted and easy to adjust. A great way to accomplish this is to mount pyranometers to a rigid object using an adjustable bracket. After verifying that you have firmly mounted the pyranometer to the bracket, you can make final adjustments to the sensor orientation. Multi-direction adjustable brackets with leveling screws make it easy to perform fine adjustments quickly, which not only reduces labor costs but also improves job site safety by minimizing the time a technician spends on a ladder.

Because irradiance measurements are so important, we suggest installing at least two sensors for redundancy and using additional sensors as appropriate in larger systems. In addition to providing redundant data streams, the extra sensors allow project stakeholders to compare measurements from multiple sensors, which will either improve confidence in the values or identify possible outliers. With single-axis trackers, it is particularly important to verify that the POA and GHI sensors agree at solar noon. Validating this one item answers three important quality assurance questions: Is the SCADA system scaling the POA and GHI measurements correctly? Are the POA sensors installed correctly? Is the tracker functioning properly and at the right angle (0°) at solar noon?
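
The cross-check itself is simple, as the sketch below suggests. The 3 percent agreement tolerance is an illustrative assumption rather than a standard requirement.

    def solar_noon_agreement(poa_wm2, ghi_wm2, tolerance=0.03):
        """On a horizontal single-axis tracker at solar noon (tracker at 0 degrees),
        POA and GHI readings should agree within a small fraction."""
        return abs(poa_wm2 - ghi_wm2) / ghi_wm2 <= tolerance

    print(solar_noon_agreement(poa_wm2=982.0, ghi_wm2=974.0))  # True: sensors and tracker look healthy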

Power measurements. While testers typically assume that utility meters, check meters and inverter output data are accurate, that is not necessarily the case. To ensure appropriate readings, it is important to understand power measurement accuracy parameters and multi-measurement accumulation (roll-up) methods, as well as validate meter programming. If the SCADA provider has not worked with a particular meter before, ask its team to exercise due diligence in advance so it does not waste time in the field when the clock is ticking.

Temperature sensors. While temperature measurements tend to be accurate, their use in performance evaluation is tricky. It is important to select temperature sensors and placement locations that capture measurements representative of the array at large or a specific subset thereof. Unrepresentative measurements will skew performance evaluation results, in some cases significantly.

Ambient temperature measurements tend to be very accurate and reliable if the team takes care to install the sensors correctly. By comparison, back-of-module (BOM) temperature measurements do a poor job of representing the entire array. The attachment method, sensor location on the module and module location within the array all affect temperature measurements taken on the back of a module.

Under most environmental conditions, BOM temperature measurements do not represent the array at large. As a result, you must translate these values to derive a cell temperature value, modified by a reported ΔT (temperature difference) condition; translate them again to derive thermal loss or gain based on documented module performance parameters; and, finally, extrapolate them to an effective output power value. Each step in this process introduces uncertainty and room for error.
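
The translation chain looks roughly like the following sketch. The conduction ΔT and the power temperature coefficient are typical placeholder values, not project data, and each step carries its own uncertainty.

    G_REF = 1000.0  # reference irradiance, W/m^2

    def cell_temperature(t_bom_c, poa_wm2, delta_t_c=3.0):
        """Cell temperature from back-of-module temperature: Tcell = Tbom + (E / G_REF) * dT."""
        return t_bom_c + (poa_wm2 / G_REF) * delta_t_c

    def thermal_power_factor(t_cell_c, gamma_per_c=-0.0037, t_ref_c=25.0):
        """Fractional power adjustment relative to STC from the module temperature coefficient."""
        return 1.0 + gamma_per_c * (t_cell_c - t_ref_c)

    t_cell = cell_temperature(t_bom_c=48.0, poa_wm2=850.0)  # about 50.6 C
    print(round(thermal_power_factor(t_cell), 3))           # about 0.905 of STC power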

Thermal loss models for PV arrays based on BOM temperature measurements are in no way mature enough for teams to use for performance testing evaluations where a tenth of a percent difference translates to hundreds of thousands of dollars. While some independent engineers, developers and owners still ask for BOM measurements, you should avoid performance evaluations that use BOM temperature as a primary measurement. To lower uncertainty and reduce the complexity of data acquisition and analysis, the working groups responsible for performance test standards such as “ASTM E2848-13: Standard Test Method for Reporting Photovoltaic Non-Concentrator System Performance” have written BOM measurements out in favor of ambient temperature measurements.

In the event that contract terms require the project team to use cell temperature values derived from BOM sensors as the basis for performance testing, team members need to have a detailed discussion about the associated risks and implications. It is certainly possible to address the uncertainty this practice builds into the test protocol. The project team just needs to make sure to do so, as this is sometimes overlooked.

Strategies for Success

While we recognize that project team members may need to deviate from their conventional delivery models to accommodate the performance testing means and methods we have described here, our experience shows that such deviation is both necessary and beneficial. Although the solar industry has rigorously optimized system design, engineering, procurement and installation via iteration and continuous improvement, the performance testing process remains relatively immature and is ripe for development.

We have based our perspective on project closeout on the testing and commissioning problems our clients have encountered in the real world, as well as on our (sometimes limited) ability to identify root causes and solutions. While contract closeout problems are unpredictable and usually very complex, our experience is that hard work and high-quality data analysis can solve most of these issues. With adequate monitoring and commissioning documentation, a project team can investigate, diagnose and correct the majority of performance shortfalls within the timeframe of the project schedule. The following is a summary of project closeout practices that have worked well for us.

Say no to secrets. We strongly advocate an open and transparent project delivery model in which the team shares energy models, design documents, testing targets, evaluation methods and commissioning reports. While this approach may not be right for every team, all parties need to recognize that any insistence on secrecy or confidentiality introduces risk, because secrecy impedes troubleshooting. Proposing to keep other stakeholders in the dark, or agreeing to be kept there, is itself a risk. A transparent and collaborative testing and closeout process works well precisely because it allows the team to find solutions more quickly.

Centralize data. Create a central repository for all relevant project information. This data center should contain any resources that affect, inform or influence performance testing and project closeout. As a general rule, the data center should contain all the information a completely uninformed third party would need to validate or conduct performance testing from scratch without help. This information archive should include commissioning data, testing model, target results with derivations, test evaluation tools, unrestricted access to operational data downloads and backup data for troubleshooting.

Establish a tiger team. Assemble a group of smart people from multiple disciplines to shepherd the project from inception to closeout. The configuration of this team will evolve as the project matures. At the time of performance testing, the closeout team should include agents representing the owner, EPC firm, SCADA provider, inverter supplier, independent engineering providers, design engineering team and party responsible for energy modeling. This team of experts will meet on an as-needed basis during the project development process and on a daily basis during the big push to complete performance testing. Membership continuity is critical to the team’s success. It is also important to keep all team members fully informed at every step of the process.

Ready triage teams. Closeout team members must ensure they have adequate backup resources available for problem-solving and troubleshooting. It is especially important to have a backup squad available during the run-up to the performance tests, as projects approaching COD cannot wait for a given vendor to assemble an ad hoc squad to solve problems. It is the direct responsibility of each closeout team member to ensure that he or she has the right engineers, programmers or field personnel available at the time of the performance evaluation test.

Conduct preliminary tests. While project schedules often omit this step, it is critical for success. Come test time, everything has to work reliably, accurately and simultaneously. A single stakeholder running behind schedule will delay the test schedule. One wayward sensor will jeopardize the accuracy of the performance evaluation. The failure of any major component will invalidate the test results. These examples illustrate why it is essential to have triage teams at the ready. Conducting a preliminary test run—or a series of runs, if necessary—can potentially save weeks by obviating test period extensions. Preliminary test runs eliminate nuisance problems, provide a forum for multi-disciplinary validation of system operation and significantly speed up the formal testing process.

Keep all eyes on the prize. At the time of testing, the closeout team should meet every day to evaluate preliminary test results, troubleshoot problems and validate operational information. Problems are easy to identify and solve when you make data sets available to all participants, who bring different points of view to bear on the issue. This is the greatest advantage of the process and the most useful part of the open approach to testing. When you have a team of experts dedicated to making a system work, amazing things happen.

Strive for consensus. Those who are used to more-hierarchical methods of project delivery sometimes deride consensus methods as “group therapy.” Our response is simple: What is wrong with group therapy? We all know that things can and do go wrong. Some schedules will slip. Some systems will underperform. Some liquidated damages will require negotiation. But these risks are independent of delivery method. The thing we should be concerned about is how we are going to work through these problems. If we all work together, we can fix problems faster, and we can all take pride in a job well done. The overarching goal—and the likely end result—of the open project–delivery process is a shared sense of accomplishment when the project reaches COD.

With mutually agreed upon assumptions, models and test methods, each team participant can revisit individual processes based on testing outcomes. The owner and developer can apply results to future projects and adjust business models accordingly. EPC teams can perform subsystem analyses to better predict under- or overperforming systems. Independent engineers can review and analyze reliable data sets. Regardless of any individual outcome, the information gathered from an open testing process is valuable for everyone involved, especially future owners and operators. There is no better foundation for long-term viability than an asset that is fully documented and complete when it enters commercial operations.

CONTACT:

Anastasios Hionis, PE / PV AMPS / Sacramento, CA / pvamps.com

Mat Taylor / PV AMPS / Paradise, CA / pvamps.com
