Multi-Label Fruit and Vegetable Classification Using CNN: Addressing Class Imbalance with Focal Loss
DOI:
https://doi.org/10.37859/coscitech.v6i3.10116
Abstract
This study investigates the effectiveness of Focal Loss as a solution to class imbalance in multi-label fruit and vegetable classification. Using a ResNet50-based Convolutional Neural Network (CNN), two models were trained and evaluated: one with Focal Loss and one with Binary Cross-Entropy (BCE) Loss as a baseline. To address the limited availability of multi-label datasets, a synthetic multi-label dataset was created by combining images from existing single-label datasets. Experimental results show that the model trained with Focal Loss achieved an accuracy of 0.9390 and an F1-score of 0.9863, outperforming the BCE Loss model, which reached an accuracy of 0.8850 and an F1-score of 0.9718. The comparative analysis indicates that Focal Loss, by focusing training on difficult examples, effectively addresses class imbalance and yields superior performance. The study concludes that Focal Loss is an effective tool for multi-label classification and notes its limitations, including the synthetic nature of the dataset and the limited training duration, which motivate further research.
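The abstract does not include code, so the following is only a minimal NumPy sketch of the per-label focal loss it describes: standard binary focal loss applied independently to each label's sigmoid output, which is how focal loss is commonly adapted to multi-label classification. The default hyperparameters `alpha=0.25` and `gamma=2.0` are assumptions taken from the original focal loss formulation, not values reported by this paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multilabel_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss averaged over all labels.

    logits  : raw model outputs, shape (..., num_labels)
    targets : 0/1 ground-truth labels, same shape
    The (1 - p_t)^gamma factor down-weights well-classified labels,
    shifting training effort toward hard or minority-class examples.
    """
    p = sigmoid(np.asarray(logits, dtype=float))
    t = np.asarray(targets, dtype=float)
    # p_t is the predicted probability of the true label value
    p_t = np.where(t == 1.0, p, 1.0 - p)
    alpha_t = np.where(t == 1.0, alpha, 1.0 - alpha)
    eps = 1e-8  # numerical safety for log
    loss = -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t + eps)
    return loss.mean()

# An easy positive (large logit) contributes far less than a hard one,
# which is the mechanism the paper credits for handling imbalance.
easy = multilabel_focal_loss(np.array([4.0]), np.array([1.0]))
hard = multilabel_focal_loss(np.array([-2.0]), np.array([1.0]))
```

With `gamma=0` and `alpha=0.5` the expression reduces (up to a constant factor) to plain BCE, which is why BCE is the natural baseline in the comparison above.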