Achieving Commercial Operations in Large-Scale PV Power Systems
The SCADA vendor needs to field the right gear and specify the proper data collection rate. To get meaningful values, the system should have the ability to roll up multiple data points per test interval. For example, if the performance test calls for 1-minute data sets, then the data-polling rate might be set for 5 seconds. Coincident measurement is key to accurate, high-resolution performance analysis; the faster the data collection rate, the more measurement coincidence matters. Some SCADA systems do not sync time stamps to a network clock or GPS, thereby calling measurement simultaneity into question. SCADA providers must be aware of these types of test requirements, both explicit and implicit.
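As a sketch of the roll-up described above, the following shows how 5-second polled samples might be averaged into 1-minute test intervals, with every channel truncated to the same clock boundaries so that measurements stay coincident. The function and variable names are illustrative, not taken from any particular SCADA product.

```python
from datetime import datetime, timedelta, timezone

def roll_up(samples, interval_s=60):
    """Average raw (timestamp, value) samples into fixed test intervals.

    `samples` is a list of (datetime, float) pairs polled at a faster
    rate (e.g., every 5 s); each output entry is the mean of all samples
    whose timestamps fall in one `interval_s`-second bucket.
    """
    buckets = {}
    for ts, value in samples:
        # Truncate each timestamp to the start of its interval so every
        # sensor channel rolls up against the same clock boundaries.
        bucket = ts - timedelta(seconds=ts.timestamp() % interval_s)
        buckets.setdefault(bucket, []).append(value)
    return {b: sum(v) / len(v) for b, v in sorted(buckets.items())}

# Example: twelve 5-second POA readings roll up into one 1-minute value.
start = datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc)
samples = [(start + timedelta(seconds=5 * i), 850.0 + i) for i in range(12)]
print(roll_up(samples))  # one bucket at 12:00:00, mean 855.5
```

Note that the bucketing depends on timestamps from a common, synchronized clock; if each data source stamps its own unsynchronized time, samples land in the wrong buckets and the coincidence problem described above reappears.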
The details matter most when it comes to installing the monitoring system so that it properly reports field conditions. A correctly installed system allows remote troubleshooting and expedites the process of identifying and resolving problems. At the end of a project, this efficiency can save a lot of labor costs and shorten the schedule. More important, the system will produce data that truly represent the installed system, so that the performance test yields results indicative of actual plant behavior.
Performance assessment, whether at the time of testing or in operation, directly depends on reliable, accurate measurements of primary data. The team must install sensors correctly and validate, cross-check and correctly map them in the SCADA. The testing protocol will dictate the primary measurement sensors, which typically include irradiance sensors, power meters and temperature sensors.
Irradiance sensors. Pyranometer installation errors are the most common cause of perceived performance problems. As an example, imagine a north-sloping tracking array that uses plane-of-array (POA) irradiance sensors leveled to a horizontal axis rather than aligned to the axis of the modules. In this scenario, the array is canted away from the sun, but the POA sensor is not. If the energy model also assumes a perfectly flat site, the model and the sensor placement will match each other well, but neither will match the as-built condition. This seemingly small discrepancy between modeled and measured conditions will likely result in an inaccurate evaluation of the system as underperforming.
Energy test results are especially sensitive to irradiance data. If you do not install a POA irradiance sensor at the same angle as the array, the resulting measurements will not accurately reflect module orientation. Similarly, if you do not make sure the bubble level is centered in the level window on a global horizontal irradiance (GHI) sensor, the accuracy of these data will suffer. It is easy to overlook small alignment issues, but they can have a large impact. Misaligned pyranometers are sometimes the source of hard-to-diagnose errors that can lead to performance test failures. Fortunately, a field team can easily identify and correct these problems using digital levels and careful measurements to properly adjust sensors.
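To build intuition for why small alignment issues matter, the simplified calculation below estimates the beam-irradiance error introduced when a pyranometer is tilted a few degrees toward the sun, using the cosine response of direct irradiance. This is a rough sensitivity sketch that ignores the diffuse component; the angles chosen are illustrative.

```python
import math

def tilt_error_pct(incidence_deg, misalignment_deg):
    """Approximate beam-irradiance measurement error (%) caused by a
    pyranometer tilted `misalignment_deg` toward the sun, based on the
    cosine dependence of direct irradiance on incidence angle.
    Ignores diffuse light; intended for intuition, not analysis."""
    true = math.cos(math.radians(incidence_deg))
    measured = math.cos(math.radians(incidence_deg - misalignment_deg))
    return (measured / true - 1.0) * 100.0

# A 2-degree sensor misalignment at 45 degrees incidence skews the
# beam-irradiance reading by roughly 3.4%.
print(round(tilt_error_pct(45.0, 2.0), 2))
```

An error of a few percent in measured irradiance can easily exceed the margin in a performance guarantee, which is why a digital level and a few minutes of adjustment are cheap insurance.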
POA irradiance, because it directly affects output power, is perhaps the single most important parameter to verify and capture as accurately as possible. For best results, irradiance sensors must be stable, firmly mounted and easy to adjust. A great way to accomplish this is to mount pyranometers to a rigid object using an adjustable bracket. After verifying that you have firmly mounted the pyranometer to the bracket, you can make final adjustments to the sensor orientation. Multi-direction adjustable brackets with leveling screws make it easy to perform fine adjustments quickly, which not only reduces labor costs but also improves job site safety by minimizing the time a technician spends on a ladder.
Because irradiance measurements are so important, we suggest installing at least two sensors for redundancy and using additional sensors as appropriate in larger systems. In addition to providing redundant data streams, the extra sensors allow project stakeholders to compare measurements from multiple sensors, which will either improve confidence in the values or identify possible outliers. With single-axis trackers, it is particularly important to verify that the POA and GHI sensors agree at solar noon. Validating this one item answers three important quality assurance questions: Is the SCADA system scaling the POA and GHI measurements correctly? Are the POA sensors installed correctly? Is the tracker functioning properly and at the right angle (0°) at solar noon?
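The solar-noon cross-check described above can be automated. The sketch below, with illustrative sensor counts and an assumed 3% agreement tolerance (not a value from any standard), compares each POA sensor against the mean of the GHI references and flags outliers for follow-up on scaling, mounting or tracker angle.

```python
def check_solar_noon_agreement(poa_readings, ghi_readings, tolerance=0.03):
    """Cross-check redundant POA and GHI sensors at solar noon.

    With single-axis trackers flat (0 degrees) at solar noon, every POA
    sensor should read within `tolerance` (fractional) of the mean of
    the GHI references. Returns (index, reading, deviation) tuples for
    suspect POA sensors. The 3% tolerance is an illustrative assumption.
    """
    ghi_ref = sum(ghi_readings) / len(ghi_readings)
    suspects = []
    for i, poa in enumerate(poa_readings):
        deviation = (poa - ghi_ref) / ghi_ref
        if abs(deviation) > tolerance:
            suspects.append((i, poa, deviation))
    return suspects

# Two GHI references agree; POA sensor 2 reads roughly 8% high, so it is
# flagged for a scaling, mounting or tracker-angle problem.
print(check_solar_noon_agreement([978.0, 981.0, 1059.0], [975.0, 979.0]))
```

A clean result (no flagged sensors) answers all three quality assurance questions at once; a flagged sensor tells the field team exactly where to start looking.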
Power measurements. While testers typically assume that utility meters, check meters and inverter output data are accurate, that is not necessarily the case. To ensure appropriate readings, it is important to understand power measurement accuracy parameters and multi-measurement accumulation (roll-up) methods, as well as validate meter programming. If the SCADA provider has not worked with a particular meter before, ask its team to exercise due diligence in advance so it does not waste time in the field when the clock is ticking.
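One common meter-programming error is a wrong CT or PT ratio, which makes a meter read a fixed multiple of the true value. As a sketch of the kind of validation the SCADA team can run in advance, the function below compares a meter against a check meter and tests whether their ratio matches a common transformer-ratio multiple; the ratio list and 1% match threshold are illustrative assumptions.

```python
def meter_scale_check(meter_kw, check_kw, common_ratios=(0.5, 2.0, 10.0, 100.0)):
    """Flag likely CT/PT scaling errors in meter programming.

    If the primary meter and check meter disagree, test whether one
    reading is a common transformer-ratio multiple of the other, which
    points to a programming error rather than a real power difference.
    Returns the matching ratio, or None if no scaling error is apparent.
    The ratio list and tolerance are illustrative assumptions.
    """
    if check_kw == 0:
        return None
    ratio = meter_kw / check_kw
    for r in common_ratios:
        if abs(ratio - r) / r < 0.01:
            return r
    return None

# A meter programmed with a doubled CT ratio reads twice the check meter.
print(meter_scale_check(2004.0, 1000.0))  # -> 2.0
print(meter_scale_check(1001.0, 1000.0))  # -> None (meters agree)
```

Catching a scaling error like this during pre-test validation is far cheaper than discovering it mid-test, when the clock is ticking.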
Temperature sensors. While temperature measurements tend to be accurate, their use in performance evaluation is tricky. It is important to select temperature sensors and placement locations that capture measurements representative of the array at large or a specific subset thereof. Unrepresentative measurements will skew performance evaluation results, in some cases significantly.
Ambient temperature measurements tend to be very accurate and reliable if the team takes care to install the sensors correctly. By comparison, back-of-module (BOM) temperature measurements do a poor job of representing the entire array. The attachment method, sensor location on the module and module location within the array all affect temperature measurements taken on the back of a module.
Under most environmental conditions, BOM temperature measurements do not represent the array at large. As a result, you must translate these values to derive a cell temperature value, modified by a reported ΔT (temperature difference) condition; translate them again to derive thermal loss or gain based on documented module performance parameters; and, finally, extrapolate them to an effective output power value. Each step in this process introduces uncertainty and room for error.
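The translation chain above can be sketched as follows, using the common linear form for module-to-cell temperature rise and a module power temperature coefficient. The ΔT of 3 °C and the coefficient of -0.40%/°C are illustrative assumptions; actual values depend on the mounting configuration and the documented module parameters, and each assumed value is one of the uncertainty sources the text describes.

```python
def cell_temp_from_bom(t_bom_c, g_poa, delta_t_c=3.0):
    """Translate back-of-module (BOM) temperature to cell temperature.

    Uses the common linear form T_cell = T_bom + (G_POA / 1000) * dT,
    where dT is the module-to-cell temperature rise at 1,000 W/m^2.
    The 3 C default is an illustrative assumption; use the value
    documented for the actual mounting configuration.
    """
    return t_bom_c + (g_poa / 1000.0) * delta_t_c

def thermal_power_factor(t_cell_c, gamma_pct_per_c=-0.40, t_ref_c=25.0):
    """Fractional power adjustment from the module's power temperature
    coefficient (gamma, %/C), relative to the 25 C rating condition.
    The -0.40 %/C default is an illustrative assumption."""
    return 1.0 + (gamma_pct_per_c / 100.0) * (t_cell_c - t_ref_c)

# Example: a 55 C BOM reading at 1,000 W/m^2 implies a 58 C cell
# temperature and roughly a 13% thermal derate for a -0.40 %/C module.
t_cell = cell_temp_from_bom(55.0, 1000.0)
print(t_cell, thermal_power_factor(t_cell))
```

Note how an error in any one input, an unrepresentative BOM reading, an assumed ΔT, or an imprecise temperature coefficient, propagates through every subsequent step of the evaluation.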
Thermal loss models for PV arrays based on BOM temperature measurements are in no way mature enough for teams to use for performance testing evaluations where a tenth of a percent difference translates to hundreds of thousands of dollars. While some independent engineers, developers and owners still ask for BOM measurements, you should avoid performance evaluations that use BOM temperature as a primary measurement. To lower uncertainty and reduce the complexity of data acquisition and analysis, the working groups responsible for performance test standards such as “ASTM E2848-13: Standard Test Method for Reporting Photovoltaic Non-Concentrator System Performance” have written BOM measurements out in favor of ambient temperature measurements.
In the event that contract terms require the project team to use cell temperature values derived from BOM sensors as the basis for performance testing, team members need to have a detailed discussion about the associated risks and implications. It is certainly possible to address the uncertainty this practice builds into the test protocol. The project team just needs to make sure to do so, as this is sometimes overlooked.