JURNAL FASILKOM
https://ejurnal.umri.ac.id/index.php/JIK
<p><span class="">Jurnal <strong>FASILKOM (teknologi inFormASi dan ILmu KOMputer)</strong></span> is a Double Blind peer-review Journal dedicated for the publication of a qualified research results in a scope of Information Technology. The journal releases periodically 3 times a year on April, August, and December. all the published article in<strong> jurnal FASILKOM (teknologi inFormASi dan ILmu KOMputer) are open for access, which allows the article accessible for free online without subscription. </strong></p>Unversitas Muhammadiyah Riauen-USJURNAL FASILKOM2089-3353<p><strong>Copyright Notice</strong></p> <p>An author who publishes in the Jurnal FASILKOM (teknologi inFormASi dan ILmu KOMputer) agrees to the following terms:</p> <ul> <li class="show">Author retains the copyright and grants the journal the right of first publication of the work simultaneously licensed under the Creative Commons Attribution-ShareAlike 4.0 License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal</li> <li class="show">Author is able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book) with the acknowledgement of its initial publication in this journal.</li> <li class="show">Author is permitted and encouraged to post his/her work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of the published work (See <a href="http://opcit.eprints.org/oacitation-biblio.html">The Effect of Open Access</a>).</li> </ul> <p>Read more about the Creative Commons Attribution-ShareAlike 4.0 Licence here: <a href="https://creativecommons.org/licenses/by-sa/4.0/">https://creativecommons.org/licenses/by-sa/4.0/</a>.</p>Penerapan 
Dekomposisi Matriks untuk Reduksi Kompleksitas Komputasi pada Algoritma Machine Learning
https://ejurnal.umri.ac.id/index.php/JIK/article/view/11166
<p><em>The increasing complexity of machine learning algorithms is often accompanied by higher computational costs, particularly when dealing with high-dimensional data. This condition poses significant challenges in terms of computational efficiency and resource utilization. One mathematical approach that can address this issue is the application of linear algebra concepts, specifically matrix decomposition techniques. This study aims to apply matrix decomposition methods to reduce computational complexity in machine learning algorithms without significantly degrading model performance. The proposed approach employs matrix decomposition, such as Singular Value Decomposition (SVD), during the data preprocessing and model training stages. The performance of the algorithms is evaluated by comparing their behavior before and after the application of matrix decomposition in terms of computational time, accuracy, and memory efficiency. The experimental results demonstrate that matrix decomposition can significantly reduce computational complexity and improve learning efficiency, while maintaining stable or only slightly reduced accuracy. These findings indicate that matrix decomposition is an effective and practical approach for optimizing machine learning algorithms, particularly for large-scale and high-dimensional datasets.</em></p>
Safira Hasna Setiyani, Yusiana Rahma
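The SVD-based reduction step described in the abstract can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation; the function name, matrix sizes, and rank are assumptions.

```python
import numpy as np

def truncated_svd_reduce(X, k):
    """Project X onto its top-k right singular vectors (a rank-k approximation),
    shrinking each sample from d features to k and cutting downstream training cost."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T  # (n, k) reduced representation

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))   # hypothetical high-dimensional dataset
Z = truncated_svd_reduce(X, 10)
print(Z.shape)  # (100, 10)
```

In a real pipeline the same `Vt[:k]` learned on the training set would also be applied to the test set, so both live in the same reduced space.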
Copyright (c) 2026 Safira Hasna Setiyani, Yusiana Rahma
https://creativecommons.org/licenses/by-sa/4.0
Published 2026-04-30 | Vol. 16 No. 1 | pp. 79–85 | DOI: 10.37859/jf.v16i1.11166
Modeling Bone Fracture Detection and Classification in X-Ray Radiographs Using YOLOv8 and CLAHE Preprocessing
https://ejurnal.umri.ac.id/index.php/JIK/article/view/11241
<p><em>This study aims to develop a model for detecting and classifying bone fractures in digital X-ray radiography images using the You Only Look Once version 8 (YOLOv8) architecture with the application of Contrast Limited Adaptive Histogram Equalization (CLAHE) as a preprocessing method. The CLAHE method is used to improve contrast quality and clarify bone structure details, thereby facilitating the feature extraction process by the detection model. The research dataset comprises 641 X-ray and MRI images divided into ten classes consisting of various types of bone fractures, namely Comminuted, Greenstick, Linear, Oblique, Oblique Displaced, Segmental, Spiral, Transverse, and Transverse Displaced, as well as the Healthy class as a comparison. Model training was conducted for 100 epochs using YOLOv8n with CLAHE-based augmentation to improve the visibility of the fracture area. The best results were obtained from the YOLOv8-CLAHE (balanced) model with a mAP@0.5 of 0.933 to 0.941, precision of 0.939 to 0.965, and recall of 0.877 to 0.901. The Segmental and Comminuted classes showed the highest performance, while classes with limited data such as Greenstick and Linear still had relatively low accuracy. The model's inference speed reached 8.3 milliseconds per image, demonstrating the potential application of this system for real-time fracture detection in clinical settings. The results of this study show that the application of the CLAHE method in the image preprocessing stage can improve the detection and classification performance of YOLOv8, and has the potential to support the development of automated diagnosis systems in the field of orthopedic radiology.</em></p>
JOSE JULIAN HIDAYAT, Abdul Halim Anshor, M. Syaibani Anwar
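In practice CLAHE is usually applied via OpenCV's `cv2.createCLAHE`. The sketch below shows only the underlying global histogram-equalization step that CLAHE refines with per-tile equalization and a clip limit; the test image and all values are illustrative assumptions, not from the study.

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization of an 8-bit grayscale image.
    CLAHE extends this idea by equalizing per tile with a clipped histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level so the output CDF is approximately uniform.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[img]

img = np.tile(np.arange(64, 192, dtype=np.uint8), (32, 1))  # low-contrast strip
out = hist_equalize(img)
print(out.min(), out.max())  # 0 255 — contrast stretched to the full range
```

Stretching contrast this way makes faint fracture-line edges easier for a detector to pick up, which is the motivation the abstract gives for CLAHE.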
Copyright (c) 2026 JOSE JULIAN HIDAYAT, Abdul Halim Anshor, M. Syaibani Anwar
https://creativecommons.org/licenses/by-sa/4.0
Published 2026-04-30 | Vol. 16 No. 1 | pp. 31–45 | DOI: 10.37859/jf.v16i1.11241
Lead Scoring Prediction for Sales Optimization Using Random Forest and the SMOTE Technique
https://ejurnal.umri.ac.id/index.php/JIK/article/view/11292
<p><em>Accurate lead scoring systems have become a strategic necessity for organizations operating in data-driven marketing environments, as they enable systematic identification of high-value customer prospects to maximize sales conversion efficiency. A fundamental challenge confronting conventional classification models is the class imbalance inherent in real-world marketing data, which induces majority-class bias and substantially reduces sensitivity toward minority-class prospects. This study proposes a Random Forest (RF)-based lead scoring prediction model integrated with the Synthetic Minority Over-sampling Technique (SMOTE) to address this limitation systematically. The dataset employed is the Lead Scoring Dataset from Kaggle, comprising 9,240 customer prospect records from an educational company with a class imbalance ratio of 1.59:1. Preprocessing included missing value treatment, removal of attributes exceeding 40% data loss, mode-based imputation, and categorical feature encoding. Following an 80:20 stratified split, SMOTE was applied exclusively to the training set to produce a balanced class distribution and prevent data leakage. The RF model was configured with n_estimators = 100, max_features = 'sqrt', and class_weight = 'balanced'. The proposed RF+SMOTE model achieved accuracy of 88.80%, precision of 86.44%, recall of 84.13%, F1-Score of 85.27%, and AUC-ROC of 0.9453, outperforming the baseline across four of five evaluation metrics. The most notable improvement was observed in recall, with a gain of 1.26 percentage points. Stratified 5-Fold Cross-Validation confirmed robust generalization capability, with AUC-ROC values consistently ranging between 94% and 95%. These findings demonstrate that the hybrid RF+SMOTE approach effectively enhances high-potential prospect detection while maintaining overall model stability for real-world Customer Relationship Management (CRM) deployment.</em></p>
DAFFA PRATAMA PUTRA, Dimas Agil Kusuma, M. Rizki Al Akbar, Ali Ibrahim, Fathoni Fathoni
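The SMOTE oversampling step named in the abstract interpolates between a minority sample and one of its k nearest minority neighbors. Real pipelines typically use `imblearn.over_sampling.SMOTE`; the toy version below, on synthetic data with assumed sizes, only illustrates the mechanism.

```python
import numpy as np

def smote(X_min, n_new, k=5, seed=0):
    """Minimal SMOTE sketch: synthesize n_new minority samples by linear
    interpolation between a minority point and a random one of its k nearest
    minority neighbors."""
    rng = np.random.default_rng(seed)
    d2 = ((X_min[:, None, :] - X_min[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)            # exclude self-distance
    nn = np.argsort(d2, axis=1)[:, :k]      # k nearest neighbors per point
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = nn[i, rng.integers(k)]
        gap = rng.random()                  # position along the segment
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)

X_min = np.random.default_rng(1).normal(size=(20, 4))  # hypothetical minority class
synth = smote(X_min, n_new=30)
print(synth.shape)  # (30, 4)
```

As the abstract stresses, this must be applied only to the training split; oversampling before the split would leak synthetic copies of test-adjacent points into training.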
Copyright (c) 2026 DAFFA PRATAMA PUTRA, Dimas Agil Kusuma, M. Rizki Al Akbar, Ali Ibrahim, Fathoni Fathoni
https://creativecommons.org/licenses/by-sa/4.0
Published 2026-04-30 | Vol. 16 No. 1 | pp. 120–126 | DOI: 10.37859/jf.v16i1.11292
Analysis of Educational Equity in Indonesia Using PCA Dimensionality Reduction and K-Means Clustering
https://ejurnal.umri.ac.id/index.php/JIK/article/view/11349
<p><em>Educational equity in Indonesia continues to face substantial challenges due to significant disparities in achievement across provinces. This study aims to map these gaps by combining Principal Component Analysis (PCA) for dimensionality reduction and K-Means Clustering for regional grouping. Utilizing 2023 data from the Indonesian Central Bureau of Statistics (BPS) with eight key indicators, the analysis reveals that three principal components effectively capture 91.85% of the data variance. The clustering procedure successfully categorizes provinces into two distinct groups: 36 provinces in the high-achievement cluster and two provinces that lag significantly (Central Papua and Papua Mountains). A Silhouette Score of 0.782 confirms the high validity and consistency of the clustering results. These findings serve as a critical alert for policymakers to implement targeted interventions in underperforming regions to prevent further widening of the educational gap.</em></p>
Erwin Arry Kusuma, Adani Dharmawati
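The explained-variance figure quoted in the abstract comes from the PCA step: the fraction of total variance captured by the leading components. A sketch on synthetic correlated indicators (all sizes and data here are assumptions, not the BPS dataset):

```python
import numpy as np

def pca_explained_variance(X, n_components):
    """PCA via SVD of the centered data; returns component scores and the
    fraction of total variance captured by the first n_components."""
    Xc = X - X.mean(axis=0)
    _, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = S ** 2 / (len(X) - 1)             # variance along each component
    ratio = var[:n_components].sum() / var.sum()
    return Xc @ Vt[:n_components].T, ratio

rng = np.random.default_rng(0)
base = rng.normal(size=(38, 3))             # 38 provinces, 3 latent factors
X = np.hstack([base, base @ rng.normal(size=(3, 5)) * 0.9])  # 8 correlated indicators
scores, ratio = pca_explained_variance(X, 3)
print(scores.shape)  # (38, 3) — reduced data fed to K-Means
```

The reduced `scores` matrix is what a K-Means step would then cluster, exactly as the study pipelines PCA into K-Means.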
Copyright (c) 2026 Erwin Arry Kusuma, Adani Dharmawati
https://creativecommons.org/licenses/by-sa/4.0
Published 2026-04-30 | Vol. 16 No. 1 | pp. 105–112 | DOI: 10.37859/jf.v16i1.11349
Website-Based Sign Language Detection Using the YOLOv8 Architecture
https://ejurnal.umri.ac.id/index.php/JIK/article/view/11070
<p><em>Communication difficulties between the general public and people with hearing impairments due to limited access to real-time detection tools are the primary urgency of this research. This research aims to develop a cross-platform and easily accessible website-based sign language detection system, while implementing the YOLOv8 variant to remain accurate on devices with limited computing resources. The method used is Research and Development (R&D) with the AI Project Cycle framework, which includes data collection, preprocessing, modeling using the YOLOv8n variant, and implementation. The data used is sourced from the Roboflow platform, consisting of hand gesture images divided into 70% training data, 20% validation, and 10% testing. The results show that the YOLOv8n model provides high performance with a precision of 0.932, recall of 0.997, and mAP50 value of 0.995. Additionally, the model achieves an efficient inference speed averaging 2.1 ms. In conclusion, the implementation of YOLOv8 on a website-based platform successfully creates an accurate and responsive sign language detection system, making it suitable for assisting communication in real-world scenarios.</em></p>
Danang Arbian Sulistyo, Muhammad Faruqi Rabbani
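The 70/20/10 train/validation/test split mentioned in the abstract is a standard shuffled partition; a sketch (the seed and item list are assumptions, and platforms like Roboflow perform this split for you):

```python
import random

def split_dataset(items, train=0.7, val=0.2, seed=42):
    """Shuffle and partition items into train/val/test; the remainder after
    the train and val fractions goes to the test split."""
    items = list(items)
    random.Random(seed).shuffle(items)      # deterministic shuffle for reproducibility
    n = len(items)
    n_train, n_val = int(n * train), int(n * val)
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))  # 70 20 10
```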
Copyright (c) 2026 Danang Arbian Sulistyo, Muhammad Faruqi Rabbani
https://creativecommons.org/licenses/by-sa/4.0
Published 2026-04-30 | Vol. 16 No. 1 | pp. 10–19 | DOI: 10.37859/jf.v16i1.11070
Sentiment Analysis of Stardew Valley Game Reviews on Steam and Google Play
https://ejurnal.umri.ac.id/index.php/JIK/article/view/11217
<p><em>The large number of user reviews on Steam and Google Play platforms makes manual analysis difficult and prone to subjective bias. This study aims to analyze and compare user sentiment toward Stardew Valley game reviews on both platforms using a text mining approach. The data used consist of 25,099 Steam reviews and 25,594 Google Play reviews. The text preprocessing stage includes case folding, cleansing (removal of punctuation and non-alphabetic characters), tokenization, stopword removal, and lemmatization to produce more structured data. Sentiment labeling is performed using the VADER method, followed by feature extraction using TF-IDF and classification using the Multinomial Naïve Bayes algorithm. Model evaluation is conducted using 5-Fold Cross Validation with accuracy, precision, recall, and F1-score as evaluation metrics. The results show that most reviews on both platforms have positive sentiment. The classification model achieves an average accuracy of 0.8151 on Steam and 0.8382 on Google Play. In addition, the model obtains an average F1-score (macro average) of 0.55 on Steam and 0.40 on Google Play. These results indicate that the model performs adequately in sentiment classification, although it still has limitations in identifying minority sentiment classes such as negative and neutral.</em></p>
Surya Viari Tampubolon, Danang Arbian Sulistyo
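The TF-IDF step weights a term by its in-document frequency times a penalty for appearing in many documents. A minimal sketch with the smoothed IDF variant (the tiny review corpus is a made-up assumption; real pipelines use `sklearn.feature_extraction.text.TfidfVectorizer`):

```python
import math
from collections import Counter

def tfidf(docs):
    """Minimal TF-IDF: term frequency times smoothed inverse document frequency."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d.split()))  # document frequency
    out = []
    for d in docs:
        tf = Counter(d.split())
        total = sum(tf.values())
        out.append({t: (c / total) * (math.log((1 + n) / (1 + df[t])) + 1)
                    for t, c in tf.items()})
    return out

docs = ["great relaxing farm game", "great game", "boring grind"]
vecs = tfidf(docs)
# "great" appears in two of three docs, so it is down-weighted relative to "boring"
print(vecs[2]["boring"] > vecs[1]["great"])  # True
```

These weighted vectors are what a Multinomial Naïve Bayes classifier would then consume.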
Copyright (c) 2026 Surya Viari Tampubolon, Danang Arbian Sulistyo
https://creativecommons.org/licenses/by-sa/4.0
Published 2026-04-30 | Vol. 16 No. 1 | pp. 57–67 | DOI: 10.37859/jf.v16i1.11217
Implementation of the TOPSIS Method in a Desktop-Based Decision Support System for Prioritizing Social Assistance Recipients
https://ejurnal.umri.ac.id/index.php/JIK/article/view/11256
<p><em>The accurate distribution of social assistance remains a major challenge in improving community welfare. The process of determining eligible beneficiaries is often carried out manually, which can lead to subjectivity and inaccuracies in decision-making. Therefore, a decision support system is needed to assist the selection process by applying the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), enabling a more structured and objective evaluation. This study aims to implement the TOPSIS method in determining the priority of social assistance recipients through a desktop-based application.</em> <em>The strength of TOPSIS lies in its ability to rank alternatives by their proximity to the positive ideal solution and their distance from the negative ideal solution. The criteria used in this study include monthly income, number of dependents, housing conditions, employment status, and productive assets. The system is developed as a desktop application equipped with features for data management, criteria weighting, and automated TOPSIS calculations to generate rankings of potential beneficiaries. The results of Black Box testing indicate that all system features function in accordance with the specified requirements, achieving a 100% success rate, thereby supporting a fast, accurate, and objective decision-making process. Therefore, this application is expected to enhance the effectiveness and transparency of social assistance distribution.</em></p>
Azhyka Rizki Ramadhan, Candra Naya, Abdillah AG
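The TOPSIS recipe the abstract names follows a standard sequence: vector-normalize the decision matrix, apply criterion weights, find the positive and negative ideal solutions, and rank by relative closeness. A sketch with two hypothetical criteria (the applicants, weights, and criteria subset are illustrative assumptions):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """TOPSIS closeness coefficients; benefit[j] is True if criterion j is
    better when larger (False for cost criteria such as income)."""
    M = matrix / np.linalg.norm(matrix, axis=0)     # vector-normalize columns
    V = M * weights                                 # weighted normalized matrix
    ideal = np.where(benefit, V.max(0), V.min(0))   # positive ideal solution
    anti = np.where(benefit, V.min(0), V.max(0))    # negative ideal solution
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                  # closeness: higher ranks first

# Hypothetical applicants: [monthly income (cost), number of dependents (benefit)]
X = np.array([[3.0e6, 1], [1.0e6, 4], [2.0e6, 2]], float)
scores = topsis(X, weights=np.array([0.6, 0.4]), benefit=np.array([False, True]))
print(scores.argmax())  # 1 — lowest income and most dependents ranks first
```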
Copyright (c) 2026 Azhyka Rizki Ramadhan, Candra Naya, Abdillah AG
https://creativecommons.org/licenses/by-sa/4.0
Published 2026-04-30 | Vol. 16 No. 1 | pp. 20–30 | DOI: 10.37859/jf.v16i1.11256
Implementation of Extremely Randomized Trees with Accelerated Particle Swarm Optimization Hyperparameter Tuning for Anemia Subtype Classification
https://ejurnal.umri.ac.id/index.php/JIK/article/view/11295
<p><em>Anemia is a health problem that negatively affects both medical outcomes and social well-being, highlighting the need for accurate early detection. This study applies a machine learning approach to classify anemia subtypes to support clinical intervention and further examination. The Extra Trees method employs a hierarchical decision-tree structure with extreme randomization, making it robust to overfitting and capable of good generalization on small to medium datasets. Accelerated Particle Swarm Optimization (APSO) is utilized as an efficient optimization technique to improve classification performance. The novelty of this study lies in integrating Extra Trees with APSO to optimize anemia subtype classification. The dataset consists of 385 records collected from a regional hospital in East Java, Indonesia, covering four classes: thalassemia, iron deficiency anemia, anemia of chronic disease, and non-anemia. The features include patient initials, gender, age, and hematological parameters (Hb, HCT, RBC, MCV, MCH, MCHC, RDW). The optimized model achieved 85% accuracy, 87% precision, 85% recall, 85% F1-score, 95% specificity, and 94% AUC, outperforming the non-optimized model. These results indicate that the proposed approach is effective for anemia subtype classification.</em></p>
Adelia Adelia, Trimono Trimono, Mohammad Idhom
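APSO simplifies standard PSO by dropping per-particle velocity and personal bests: each particle moves toward the global best plus a decaying random perturbation. The sketch below minimizes a toy sphere function rather than tuning Extra Trees hyperparameters; the parameter values and objective are assumptions for illustration only.

```python
import numpy as np

def apso(f, dim, n=20, iters=100, alpha=0.5, beta=0.5, seed=0):
    """Accelerated PSO (Yang): pull every particle toward the global best g,
    add Gaussian noise with a geometrically decaying scale, keep the best seen."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (n, dim))
    g = min(list(X), key=f)                     # initial global best
    for t in range(iters):
        a = alpha * 0.9 ** t                    # decaying randomization
        X = (1 - beta) * X + beta * g + a * rng.standard_normal((n, dim))
        g = min(list(X) + [g], key=f)           # global best never worsens
    return g

# In the study the objective would be cross-validated Extra Trees accuracy
# over a hyperparameter vector; here a simple sphere function stands in.
best = apso(lambda x: float((x ** 2).sum()), dim=3)
print(best)
```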
Copyright (c) 2026 Adelia Adelia, Trimono Trimono, Mohammad Idhom
https://creativecommons.org/licenses/by-sa/4.0
Published 2026-04-30 | Vol. 16 No. 1 | pp. 46–56 | DOI: 10.37859/jf.v16i1.11295
Implementation of the Random Forest Algorithm for Determining Project Success Levels in a Work Order System (Case Study: PT XYZ)
https://ejurnal.umri.ac.id/index.php/JIK/article/view/11115
<p><em>PT XYZ is a company focused on technological innovation to provide modern, effective, and efficient solutions across various aspects of life. As a pioneer in the technology evolution industry, PT XYZ combines expertise in software development and the latest technologies to create positive transformation for society and businesses. In project implementation, PT XYZ faces challenges in determining project success levels objectively and measurably, particularly within the context of the work order system. This condition leads to less optimal strategic decision-making, increased risk of losses, and difficulties in conducting comprehensive, data-driven project evaluations. To address these issues, this study develops a web-based project success prediction system within the work order system by implementing the Random Forest algorithm and the Agile development approach. The Random Forest algorithm is developed using the Python programming language to classify project success levels based on several historical parameters, such as completion duration, budget, and profit percentage. The system is equipped with a user interface developed using PHP with the Laravel framework and a MySQL database, enabling efficient and integrated data processing and visualization. The results show that the implementation of the Random Forest algorithm improves prediction accuracy and provides recommendations that can support management in decision-making. The Agile approach also offers high flexibility in adapting the system to user requirements. Through this system, PT XYZ is expected to optimize work order management and proactively minimize the risk of project failure in a data-driven manner.</em></p>
Unggul Prasetyo Utomo, Hadi Zakaria
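The bagging-plus-voting idea behind Random Forest can be sketched with decision stumps: fit each one on a bootstrap sample, then take a majority vote. This is not Random Forest proper (a real forest grows deep trees and samples random feature subsets at each split, e.g. via `sklearn.ensemble.RandomForestClassifier`), and the work-order data below is entirely hypothetical.

```python
import random

def best_stump(X, y):
    """Pick the (feature, threshold, polarity) stump with the fewest errors."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({x[j] for x in X}):
            for flip in (False, True):
                pred = [(x[j] >= t) != flip for x in X]
                err = sum(p != yy for p, yy in zip(pred, y))
                if best is None or err < best[0]:
                    best = (err, j, t, flip)
    _, j, t, flip = best
    return lambda x: (x[j] >= t) != flip

def forest_predict(X, y, query, n_trees=15, seed=0):
    """Random-forest flavor: each stump is fit on a bootstrap resample of the
    training data; the ensemble predicts by majority vote."""
    rng = random.Random(seed)
    votes = 0
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        stump = best_stump([X[i] for i in idx], [y[i] for i in idx])
        votes += stump(query)
    return votes > n_trees // 2

# Hypothetical work orders: [duration in days, profit percentage]
X = [[30, 5], [45, 2], [20, 15], [25, 12], [60, 1], [15, 20]]
y = [False, False, True, True, False, True]   # True = successful project
print(forest_predict(X, y, [22, 14]))  # True
```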
Copyright (c) 2026 Unggul Prasetyo Utomo, Hadi Zakaria
https://creativecommons.org/licenses/by-sa/4.0
Published 2026-04-30 | Vol. 16 No. 1 | pp. 86–95 | DOI: 10.37859/jf.v16i1.11115
Comparison of Mamdani and Sugeno Fuzzy Systems in Technical-Indicator-Based Bitcoin Trading Optimization
https://ejurnal.umri.ac.id/index.php/JIK/article/view/11235
<p><em>This study compares Mamdani and Sugeno fuzzy inference systems for Bitcoin trading using historical BTC/USDT data. In highly volatile and non-linear cryptocurrency markets, especially during bear markets, conventional methods struggle to interpret ambiguous signals, making fuzzy logic suitable for adaptive decision-making. The dataset was collected from the Binance API for the period 20 November 2021 to 31 December 2022 and consists of 9,746 candlestick records</em>. <em>This period corresponds to a bear market phase, characterized by a significant downward trend in Bitcoin prices, which provides a challenging environment for evaluating trading strategies. Four technical indicators, Bollinger Bands, RSI, ADX, and PSAR, were used as input variables.</em> <em>The data were split into 70% training and 30% testing using a time-based approach. Performance evaluation was conducted through long-only backtesting using Total Profit, Win Rate, Maximum Drawdown, Sharpe Ratio, and Sortino Ratio. The results show that Mamdani achieved better profitability than Sugeno, with total profit of -34.17% on training data and -2.45% on testing data, while Sugeno produced -53.91% and -3.04%, respectively. Although both methods resulted in negative returns due to the bearish market conditions, their performance was better than the buy-and-hold strategy, which recorded losses of -65.78% on training data and -17.49% on testing data. This indicates that both fuzzy approaches were effective in reducing losses and improving risk management under extreme market conditions. However, Sugeno showed better risk control on testing data with a lower maximum drawdown of 18.72% compared to 25.01% for Mamdani. Overall, Mamdani is more suitable for return-oriented strategies, while Sugeno is more appropriate for risk management under bearish conditions.</em></p>
Cynthia Dwi Rahmadewi, Rizky Parlika, Hendra Maulana
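The key structural difference between the two systems is in the consequents: Sugeno rules output crisp values combined as a firing-strength-weighted average, while Mamdani rules output fuzzy sets that are aggregated and defuzzified (e.g. by centroid). A zero-order Sugeno sketch over a single RSI input (the membership ranges and rule outputs are illustrative assumptions, not the study's four-indicator rule base):

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def sugeno_signal(rsi):
    """Zero-order Sugeno inference: each rule fires with a membership degree
    and contributes a crisp score; the output is the weighted average."""
    rules = [
        (tri(rsi, 0, 20, 40), +1.0),    # oversold   -> buy
        (tri(rsi, 30, 50, 70), 0.0),    # neutral    -> hold
        (tri(rsi, 60, 80, 100), -1.0),  # overbought -> sell
    ]
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(sugeno_signal(25) > 0, sugeno_signal(75) < 0)  # True True
```

A Mamdani version would replace the crisp consequents with output fuzzy sets and add a defuzzification step, which is exactly the extra machinery the study compares against.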
Copyright (c) 2026 Cynthia Dwi Rahmadewi, Rizky Parlika, Hendra Maulana
https://creativecommons.org/licenses/by-sa/4.0
Published 2026-04-30 | Vol. 16 No. 1 | pp. 68–78 | DOI: 10.37859/jf.v16i1.11235
Movie Rating Classification by Genre Using XGBoost and LightGBM with SHAP Analysis
https://ejurnal.umri.ac.id/index.php/JIK/article/view/11273
<p><em>Movie rating is often used as an indicator of film quality and audience satisfaction. With the large availability of movie data on online platforms, machine learning techniques can be used to analyze the relationship between film characteristics and rating patterns. One important attribute that can influence movie ratings is genre. This study aims to classify movie ratings based on genre using the XGBoost and LightGBM algorithms and to analyze the contribution of each genre using SHAP (SHapley Additive exPlanations). Movie data were collected from The Movie Database (TMDB) API and processed through several preprocessing stages including genre separation, data cleaning, one-hot encoding, and rating categorization. The dataset was then divided into training and testing data with a ratio of 70:30. The classification results show that XGBoost achieved an accuracy of 0.53, slightly higher than LightGBM with an accuracy of 0.52. Further analysis using SHAP indicates that genres such as Horror, Drama, Action, and Comedy have the highest global importance in the classification model. Meanwhile, the analysis of high-rating class predictions shows that Drama has the largest contribution to predicting movies with high ratings. The findings indicate that movie genres have a measurable influence on rating classification, although the importance of genres in the machine learning model does not always align with their average rating values.</em></p>
Aprinia Salsabila Roiqoh, Rizky Parlika, Firza Prima Aditiawan
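SHAP rests on the game-theoretic Shapley value: a feature's attribution is its marginal contribution averaged over all orderings of the features. For trees the `shap` library computes this efficiently; the brute-force sketch below, over a made-up two-genre scoring function, only illustrates the definition.

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal contribution
    over every ordering. SHAP applies this idea to model features."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        seen = set()
        for p in order:
            phi[p] += value(seen | {p}) - value(seen)
            seen.add(p)
    return {p: v / len(perms) for p, v in phi.items()}

# Hypothetical rating-score model over genre flags: Drama adds 0.5,
# Horror adds 0.2, and the Drama+Horror combination an extra 0.1.
def score(genres):
    s = 0.5 * ("Drama" in genres) + 0.2 * ("Horror" in genres)
    return s + 0.1 * ({"Drama", "Horror"} <= genres)

phi = shapley_values(["Drama", "Horror"], score)
print(round(phi["Drama"], 2), round(phi["Horror"], 2))  # 0.55 0.25
```

Note the attributions sum to the full model output (0.8), the additivity property that makes SHAP plots interpretable.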
Copyright (c) 2026 Aprinia Salsabila Roiqoh, Rizky Parlika, Firza Prima Aditiawan
https://creativecommons.org/licenses/by-sa/4.0
Published 2026-04-30 | Vol. 16 No. 1 | pp. 96–104 | DOI: 10.37859/jf.v16i1.11273
Analysis of Image-Based Water Turbidity Classification with K-NN under Lighting Variations
https://ejurnal.umri.ac.id/index.php/JIK/article/view/11339
<p><em>Clean and high-quality water is an essential requirement for public health and the continuity of industrial processes, including at PT Pupuk Sriwidjaja Palembang. One of the main parameters of water quality is turbidity, which is related to the presence of suspended particles such as mud, organic matter, and microorganisms. This study aims to analyze the effect of lighting intensity variations on the performance of water turbidity classification based on digital image processing using the K-Nearest Neighbor (K-NN) algorithm. The experiment was conducted under five lighting intensity levels: 10, 30, 50, 80, and 100 lux. The research stages included image acquisition, pre-processing (resizing, color conversion, and normalization), feature extraction of color and texture using mean, standard deviation, and Gray Level Co-occurrence Matrix (GLCM), followed by classification using the K-NN algorithm. The value of k = 5 was selected because it provides a balance between sensitivity to noise and classification stability, and preliminary testing showed more consistent performance compared to smaller or larger k values. System performance evaluation was carried out using accuracy, precision, F1-score, and confusion matrix. The results showed that the best performance was achieved at 100 lux lighting intensity with an accuracy of 91.67%, precision of 93.33%, and F1-score of 91.53%, while the lowest performance occurred at 10 lux with an accuracy of 61.54%. These findings indicate that lighting intensity significantly affects turbidity classification performance, with optimal conditions found in the range of 80–100 lux. This study proves that proper lighting adjustment can improve the reliability of digital image-based classification systems for automatic water quality monitoring.</em></p>
M. Fatuhrahman, Gasim Gasim, Zaid Romegar Mair
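The classification stage with k = 5 reduces to a majority vote among the five nearest feature vectors. A sketch over hypothetical two-dimensional features standing in for the study's color/GLCM descriptors (the sample values and labels are invented for illustration):

```python
import math
from collections import Counter

def knn_predict(train, query, k=5):
    """k-NN with Euclidean distance and majority vote (k = 5 as in the study)."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical [mean gray level, GLCM contrast] features per water sample
train = [([0.90, 0.10], "clear"), ([0.85, 0.15], "clear"), ([0.80, 0.20], "clear"),
         ([0.40, 0.70], "turbid"), ([0.35, 0.80], "turbid"), ([0.30, 0.75], "turbid"),
         ([0.88, 0.12], "clear"), ([0.45, 0.65], "turbid")]
print(knn_predict(train, [0.82, 0.18]))  # clear
```

The study's finding that performance collapses at 10 lux makes sense in this framing: dim lighting compresses the mean-intensity feature, pushing the class clusters together.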
Copyright (c) 2026 M. Fatuhrahman, Gasim Gasim, Zaid Romegar Mair
https://creativecommons.org/licenses/by-sa/4.0
Published 2026-04-30 | Vol. 16 No. 1 | pp. 127–132 | DOI: 10.37859/jf.v16i1.11339
Comparison of Regression Algorithms for Predicting Sales Based on Socioeconomic Indicators of Cirebon Regency (2010–2023)
https://ejurnal.umri.ac.id/index.php/JIK/article/view/9729
<p><em>A comparative study of four regression algorithms, namely Support Vector Regression (SVR), Gradient Boosting Regressor (GBR), Random Forest Regressor (RFR), and Extreme Gradient Boosting (XGBoost), was conducted to predict annual aggregate sales based on socioeconomic indicators in Cirebon Regency from 2010 to 2023. The study utilized secondary data obtained from the Central Bureau of Statistics (Badan Pusat Statistik) of Cirebon Regency. Five predictor variables were employed, including life expectancy, expected years of schooling, mean years of schooling, per capita expenditure, and the Human Development Index (HDI). Model performance was evaluated using Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and the coefficient of determination (R-squared). The experimental results indicate that the GBR model achieved the best predictive performance, with the lowest error values (MAE = 127.98 and RMSE = 185.63) and the highest R² value (0.94), outperforming RFR, XGBoost, and SVR after parameter tuning. Feature importance analysis consistently identified life expectancy as the most influential variable across models. These findings demonstrate that ensemble-based regression methods, particularly boosting algorithms, are effective for modeling complex socioeconomic patterns and can support data-driven economic forecasting and regional policy planning.</em></p>
Muthia Rahmah, Kanaya Ramadanti, Imelda Fransiska Aulia
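The three metrics used to compare the regressors are straightforward to compute; a sketch on invented sales figures (real pipelines would use `sklearn.metrics`, and these numbers are not from the study):

```python
import math

def regression_metrics(y_true, y_pred):
    """MAE, RMSE, and the coefficient of determination R²."""
    n = len(y_true)
    errs = [yt - yp for yt, yp in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errs) / n
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    mean = sum(y_true) / n
    ss_res = sum(e * e for e in errs)                  # residual sum of squares
    ss_tot = sum((yt - mean) ** 2 for yt in y_true)    # total sum of squares
    r2 = 1 - ss_res / ss_tot
    return mae, rmse, r2

y_true = [100.0, 120.0, 140.0, 160.0]   # hypothetical annual sales
y_pred = [110.0, 115.0, 145.0, 155.0]
mae, rmse, r2 = regression_metrics(y_true, y_pred)
print(mae, rmse, r2)  # MAE = 6.25
```

RMSE penalizes large errors more heavily than MAE, which is why the study reports both alongside R².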
Copyright (c) 2026 Muthia Rahmah, Kanaya Ramadanti, Imelda Fransiska Aulia
https://creativecommons.org/licenses/by-sa/4.0
Published 2026-04-30 | Vol. 16 No. 1 | pp. 1–9 | DOI: 10.37859/jf.v16i1.9729