ACADSTAFF UGM

CREATION
Title : COMPARISON OF SMOTE RANDOM FOREST AND SMOTE K-NEAREST NEIGHBORS CLASSIFICATION ANALYSIS ON IMBALANCED DATA
Author : JUS PRASETYA (1), Dr. Abdurakhman (2)

Date : April 2023
Keyword : Machine Learning, Classification, SMOTE, Random Forest, K-Nearest Neighbors
Abstract : In machine learning, classification analysis aims to minimize misclassification and maximize prediction accuracy. The main characteristic of an imbalanced classification problem is that one class significantly exceeds the other classes in number of samples. SMOTE studies and extrapolates minority class data to produce new synthetic samples. Random forest is a classification method consisting of a combination of mutually independent classification trees, while K-Nearest Neighbors is a classification method that labels a new sample based on its nearest neighbors. SMOTE generates synthetic data in the minority class, namely class 1 (cervical cancer), raising it to 585 observations (samples), so that the total number of observations becomes 1208 samples. SMOTE random forest resulted in an accuracy of 96.28%, sensitivity of 99.17%, specificity of 93.44%, precision of 93.70%, and AUC of 96.30%. SMOTE K-Nearest Neighbors resulted in an accuracy of 87.60%, sensitivity of 77.50%, specificity of 97.54%, precision of 96.88%, and AUC of 82.27%. SMOTE random forest produces a perfect classification model, SMOTE K-Nearest Neighbors produces a good classification model, while random forest and K-Nearest Neighbors on the imbalanced data produce failed classification models.
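Note : A minimal, hypothetical Python sketch of the pipeline described in the abstract is given below, assuming scikit-learn and imbalanced-learn. The synthetic stand-in dataset from make_classification, the train/test split ratio, and all hyperparameters are illustrative assumptions, not the authors' actual cervical cancer data or settings.

# Hypothetical sketch (not the authors' code): SMOTE oversampling followed by
# random forest and K-Nearest Neighbors classification, evaluated with the
# metrics reported in the abstract (accuracy, sensitivity, specificity,
# precision, AUC). The dataset here is a synthetic stand-in.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate(model, X_train, X_test, y_train, y_test):
    # Fit the classifier and compute confusion-matrix-based metrics plus AUC.
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),
        "auc":         roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]),
    }

# Imbalanced stand-in data: class 1 is the rare (minority) class.
X, y = make_classification(n_samples=700, n_features=10, weights=[0.9, 0.1],
                           random_state=42)

# SMOTE resamples the minority class up to the size of the majority class.
X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)

X_tr, X_te, y_tr, y_te = train_test_split(X_res, y_res, test_size=0.3,
                                          stratify=y_res, random_state=42)

print("SMOTE random forest:",
      evaluate(RandomForestClassifier(random_state=42), X_tr, X_te, y_tr, y_te))
print("SMOTE K-Nearest Neighbors:",
      evaluate(KNeighborsClassifier(n_neighbors=5), X_tr, X_te, y_tr, y_te))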
Group of Knowledge : Statistics
Original Language : English
Level : National
Status : Published
Document
Title : Paper Jus Prasetyo-Abdurakhman.pdf
Document Type : [PAK] Full Dokumen