Detection of Car Tire Damage Using a Convolutional Neural Network with the ResNet-34 Architecture

  • Hendri Candra Mayana, Mechanical Engineering, Politeknik Negeri Padang
  • Desmarita Leni, Mechanical Engineering, Faculty of Engineering, Universitas Muhammadiyah Sumatera Barat
Keywords: Damage, Car Tires, CNN, Car Tire Damage, ResNet Architecture

Abstract

Tire damage inspection is part of vehicle maintenance and aims to ensure that tires remain in good condition. Visual inspection by human observers has limitations: it is not always accurate and is prone to errors when judging tire roadworthiness. This study develops a machine learning model based on a Convolutional Neural Network (CNN) with the ResNet-34 architecture to detect car tire damage. The model was trained with the Adam optimizer, a learning rate of 0.0001, a batch size of 32, and 50 epochs. Two image classes are predicted: normal tires and damaged tires. The results show that the ResNet-34 CNN model predicts both classes very well, as evidenced by an evaluation accuracy of 0.916, a precision of 0.907, a recall of 0.927, and an F1 score of 0.917. These results suggest that the ResNet-34 CNN model can serve as an effective tool for inspecting tire damage.
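
To make the training setup quoted in the abstract concrete, the following is a minimal sketch of how such a model could be assembled. It assumes PyTorch and torchvision (the abstract does not name the framework), an ImageFolder-style dataset with the two classes, and hypothetical data paths (data/train, data/val); the image size, augmentation, and choice of random versus pretrained weights are illustrative only and may differ from the study.

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Basic preprocessing; the image size and any augmentation used in the study may differ.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical dataset layout: data/train/<class>/*.jpg and data/val/<class>/*.jpg,
# where <class> is "normal" or "damaged".
train_set = datasets.ImageFolder("data/train", transform=preprocess)
val_set = datasets.ImageFolder("data/val", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)   # batch size 32
val_loader = DataLoader(val_set, batch_size=32, shuffle=False)

# ResNet-34 backbone with a two-class output head (normal vs. damaged tires).
model = models.resnet34(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)  # Adam, learning rate 0.0001

for epoch in range(50):  # 50 training epochs
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Evaluation with the four metrics reported in the abstract.
model.eval()
y_true, y_pred = [], []
with torch.no_grad():
    for images, labels in val_loader:
        outputs = model(images.to(device))
        y_pred.extend(outputs.argmax(dim=1).cpu().tolist())
        y_true.extend(labels.tolist())

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1 score :", f1_score(y_true, y_pred))

The two-class head and Adam settings mirror the parameters quoted in the abstract; whether the study initialized ResNet-34 from ImageNet weights or trained it from scratch is not stated, so weights=None is only one possible choice.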

References

National Highway Traffic Safety Administration. (2019). Traffic Safety Facts 2017 Data: Older Population.

Kementerian Perhubungan RI. (2015). Analisis Kecelakaan Lalu Lintas Tahun 2015. Jakarta: Direktorat Jenderal Perhubungan Darat.

Sari, L. P. (2020). Analisa Performance Ban Pada Unit Produksi Overburden Hd-785 Terhadap Produktivitas Tambang Batubara. KURVATEK, 5(1), 69-79.

Xu, C., & Zhu, G. (2021). Intelligent manufacturing lie group machine learning: Real-time and efficient inspection system based on fog computing. Journal of Intelligent Manufacturing, 32(1), 237-249.

Oh, S., Cha, J., Kim, D., & Jeong, J. (2020, October). Quality inspection of casting product using CAE and CNN. In 2020 4th International Conference on Imaging, Signal Processing and Communications (ICISPC) (pp. 34-38). IEEE.

Targ, S., Almeida, D., & Lyman, K. (2016). Resnet in resnet: Generalizing residual architectures. arXiv preprint arXiv:1603.08029.

Li, Y., Fan, B., Zhang, W., & Jiang, Z. (2021). TireNet: A high recall rate method for practical application of tire defect type classification. Future Generation Computer Systems, 125, 1-9.

McNeely-White, D., Beveridge, J. R., & Draper, B. A. (2020). Inception and ResNet features are (almost) equivalent. Cognitive Systems Research, 59, 312-318.

Stojek, R., Pastuszczak, A., Wróbel, P., & Kotyński, R. (2022). Single pixel imaging at high pixel resolutions. Optics Express, 30(13), 22730-22745.

Korfiatis, P., Kline, T. L., Lachance, D. H., Parney, I. F., Buckner, J. C., & Erickson, B. J. (2017). Residual deep convolutional neural network predicts MGMT methylation status. Journal of digital imaging, 30, 622-628.

Illustration of the ResNet-34 architecture, https://www.geeksforgeeks.org/introduction-to-residual-networks/?ref=rp

Mhapsekar, M., Mhapsekar, P., Mhatre, A., & Sawant, V. (2020). Implementation of residual network (ResNet) for devanagari handwritten character recognition. In Advanced Computing Technologies and Applications: Proceedings of 2nd International Conference on Advanced Computing Technologies and Applications—ICACTA 2020 (pp. 137-148). Springer Singapore.

Ibrahim, A., Peter, S., and Yuntong, S. (2018). Comparison of a vertically-averaged and a vertically-resolved model for hyporheic flow beneath a pool-riffle bedform. J. Hydrol. 557, 688–698. doi: 10.1016/j.jhydrol.2017.12.063

Leni, D., & Yermadona, H. (2023). Pemodelan Inspeksi Kerusakan Ban Mobil Menggunakan Convolutional Neural Network (CNN). Jurnal Rekayasa Material, Manufaktur dan Energi, 6(2), 176-186.

Gupta, A., Gupta, M., & Chaturvedi, P. (2020). Investing Data with Machine Learning Using Python. Strategic System Assurance and Business Analytics, 1-9.

Chan, L., Hosseini, M. S., & Plataniotis, K. N. (2021). A comprehensive analysis of weakly-supervised semantic segmentation in different image domains. International Journal of Computer Vision, 129, 361-384.

Clement, D., Agu, E., Suleiman, M. A., Obayemi, J., Adeshina, S., & Soboyejo, W. (2023). Multi-class breast cancer histopathological image classification using multi-scale pooled image feature representation (mpifr) and one-versus-one support vector machines. Applied Sciences, 13(1), 156.

Zhang, K., Zuo, W., Gu, S., & Zhang, L. (2017). Learning deep CNN denoiser prior for image restoration. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3929-3938).

Ma, J., & Yuan, Y. (2019). Dimension reduction of image deep feature using PCA. Journal of Visual Communication and Image Representation, 63, 102578.

Published
2023-12-26
How to Cite
Hendri Candra Mayana, & Desmarita Leni. (2023). Deteksi Kerusakan Ban Mobil Menggunakan Convolutional Neural Network dengan Arsitektur ResNet-34. Jurnal Surya Teknika, 10(2), 842-851. https://doi.org/10.37859/jst.v10i2.6336