Manual on-site inspection of solar modules is highly labor-intensive. The efficiency of a module is determined by the efficiency of all its cells. These cells degrade over time and may suffer from dust cover or from failures such as cracks or fractures. Automated inspection is used to reduce the time needed for on-site module inspection. Mainly three modalities are used for this: electroluminescence (EL) imaging is used by the majority of works [1, 2, 3], visual images of solar modules are used by Li et al. [4], and Pierdicca et al. use a dataset of thermal images [5]. Besides the choice of modality, related works also differ in whether the images were taken in a manufacturing setting [1, 2, 3] or during on-site inspection with drones [4, 5]. In this work, we will use a dataset of 691 EL images of solar modules taken under controlled lab conditions.
We aim to apply Deep Learning to enable automated inspection of solar modules. Existing research focuses either on the classification of failures [2, 3, 4, 5] or on the regression of module efficiency [1]. In this work, we aim to join these ideas and treat the classification of failures and the prediction of module efficiency as a multi-task learning problem. To this end, we aim to learn an embedding from cell images that can be used simultaneously for defect classification at the cell level and for power prediction at the module level. Further, we want to assess whether a very small embedding dimension is sufficient for power prediction, since the module power mainly depends on the fraction of active area per cell; we therefore hope that reducing the size of the embedding space constrains the problem in a favorable way. Finally, we plan to explore the learned embedding space through visualization and/or correlation with well-known features.
All models will be implemented in Python using PyTorch. The classification task will be addressed with the ResNet-18 architecture, and the ResNet model will be trained both with and without transfer learning.
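The following is a minimal sketch of how such a shared-embedding multi-task model could look in PyTorch. The embedding dimension, the number of defect classes, and the choice of two linear heads are illustrative assumptions, not fixed by the project description; only the ResNet-18 backbone and the optional use of pretrained weights are taken from the text.

```python
# Sketch of a shared ResNet-18 backbone with a small embedding layer and two
# task heads (cell-level defect classification, per-cell power contribution).
# Head sizes and the embedding dimension are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models


class MultiTaskCellNet(nn.Module):
    def __init__(self, embedding_dim: int = 8, num_defect_classes: int = 2,
                 pretrained: bool = True):
        super().__init__()
        # Shared backbone; `pretrained` toggles transfer learning
        # (newer torchvision versions use the `weights=` argument instead).
        backbone = models.resnet18(pretrained=pretrained)
        in_features = backbone.fc.in_features
        backbone.fc = nn.Identity()  # keep the 512-d pooled features
        self.backbone = backbone
        # Low-dimensional embedding shared by both tasks.
        self.embedding = nn.Linear(in_features, embedding_dim)
        # Cell-level defect classification head.
        self.defect_head = nn.Linear(embedding_dim, num_defect_classes)
        # Per-cell contribution to the module-level power prediction.
        self.power_head = nn.Linear(embedding_dim, 1)

    def forward(self, x):
        emb = self.embedding(self.backbone(x))
        return emb, self.defect_head(emb), self.power_head(emb)


# Example forward pass: a random stand-in for a batch of 4 single-cell EL
# images, with the grayscale channel replicated to 3 to match the ResNet input.
model = MultiTaskCellNet(pretrained=False)
cells = torch.randn(4, 3, 224, 224)
embedding, defect_logits, cell_power = model(cells)
```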
In our dataset, every module has six rows of ten cells, resulting in 60 cells per module. These single-cell images, 41,460 in total, will be used to train the neural networks with the cell-level failure label, the module-level power label, or both.
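Building on the model sketch above, the snippet below illustrates one way the 60 cell-level outputs of a module could be combined into a single module-level power prediction and trained jointly with the cell-level labels. Averaging the per-cell contributions and weighting the two losses with a single factor are assumptions made for illustration; the actual aggregation and loss design are part of what this project will investigate.

```python
# Hedged sketch: aggregate per-cell power contributions to a module-level
# prediction (mean pooling is an assumption) and combine both task losses.
import torch
import torch.nn.functional as F


def module_power_from_cells(model, module_cells: torch.Tensor) -> torch.Tensor:
    """module_cells: tensor of shape (60, 3, H, W) holding the 6x10 cells
    of one module, ordered row by row."""
    _, _, cell_power = model(module_cells)  # (60, 1) per-cell contributions
    return cell_power.mean()                # single module-level estimate


def multi_task_loss(defect_logits, defect_labels, predicted_power, true_power,
                    power_weight: float = 1.0):
    # Cross-entropy for cell-level failure labels, MSE for module power;
    # power_weight is a tunable hyperparameter balancing the two tasks.
    cls_loss = F.cross_entropy(defect_logits, defect_labels)
    reg_loss = F.mse_loss(predicted_power, true_power)
    return cls_loss + power_weight * reg_loss
```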
Literature
[1] Buerhop-Lutz, Claudia, et al. “Applying Deep Learning Algorithms to EL-images for Predicting the Module Power.” 36th European Photovoltaic Solar Energy Conference and Exhibition, Marseille, 2019.
[2] Deitsch, Sergiu, et al. “Automatic classification of defective photovoltaic module cells in electroluminescence images.” Solar Energy 185 (2019): 455-468.
[3] Sun, Mingjian, et al. “Defect detection of photovoltaic modules based on convolutional neural network.” International Conference on Machine Learning and Intelligent Communications. Springer, Cham, 2017.
[4] Li, Xiaoxia, et al. “Intelligent Fault Pattern Recognition of Aerial Photovoltaic Module Images Based on Deep Learning Technique.” J Syst Cybern Inf 16.2 (2018): 67-71.
[5] Pierdicca, R., et al. “Deep Convolutional Neural Network for Automatic Detection of Damaged Photovoltaic Cells.” International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences 42.2 (2018).