How to Evaluate a Classifier Trained with an Imbalanced Dataset? Why Is Accuracy Not Enough?
Author: Murat Karakaya
Date created: 19 May 2020
Last modified: 09 Dec 2021
Description: In this tutorial series, we will discuss how to evaluate a classifier trained with an imbalanced dataset. We will see that the accuracy metric is not enough to measure a classifier's performance, especially when the dataset is imbalanced (a minimal sketch of this problem follows the outline below). Furthermore, we will implement 8 different classifier models and compare their performance across various classification metrics. We will implement the solutions in Python with the scikit-learn library.
Accessible on:
- Murat Karakaya Akademi YouTube channel (in English or Turkish)
- muratkarakaya.net
- Google Colab
- Kaggle
- GitHub
- GitHub Pages
- GitHub Repo
Parts
I will deliver the content in 3 parts:
- Part A: Fundamentals, Metrics, Synthetic Dataset
- Part B: Dummy Classifiers, Accuracy, Precision, Recall, F1
- Part C: ROC, AUC, Worthless Test, Setting Up the Threshold
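
Before starting Part A, here is a minimal sketch of the core problem. This is not code from the tutorial itself; it assumes a synthetic binary dataset built with scikit-learn's `make_classification` (95% of samples in one class) and a `DummyClassifier` that always predicts the majority class, to show how a high accuracy score can hide a useless model:

```python
# A minimal sketch (not from the tutorial) of why accuracy alone can mislead
# on an imbalanced dataset.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

# Synthetic binary dataset where roughly 95% of samples belong to class 0.
X, y = make_classification(n_samples=10_000, weights=[0.95], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# A "classifier" that always predicts the majority class.
clf = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("Accuracy:", accuracy_score(y_test, y_pred))  # ~0.95, looks impressive
print("F1 score:", f1_score(y_test, y_pred))        # 0.0, exposes the problem
```

On such a 95%/5% split, the dummy model scores roughly 0.95 accuracy while its F1 score is 0, which is exactly the kind of gap the metrics covered in Parts B and C are designed to expose.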