Task: Classify credit card transactions of European cardholders as Fraudulent (1) or Genuine (0).
Model, Training Time: 37min 21s, Accuracy: 100%. Note, however, that roughly 99.8% of the transactions are Genuine and only about 0.17% are Fraudulent, so a model that always predicts Genuine would already reach nearly 99.8% accuracy. Because the dataset is imbalanced, it is better to focus on other metrics such as F1-score (Genuine: 100%, Fraud: 87%), precision (Genuine: 100%, Fraud: 95%) and recall (Genuine: 100%, Fraud: 81%).
Dataset available on: Kaggle Credit Card Fraud Detection
Developers' Guide: Amazon Machine Learning, Google Machine Learning Education
Link to the complete notebook: Credit Card Fraud Detection -xgboost
Algorithm | Precision | Recall | F1-score | Accuracy |
---|---|---|---|---|
XGBoost (GridSearchCV) | 100% | 100% | 100% | 100% |
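
As a rough illustration of how the results in the table could be produced, the sketch below tunes an XGBoost classifier with GridSearchCV and prints a per-class report. The file path, train/test split, and hyperparameter grid are assumptions for the sketch, not the notebook's exact settings.

```python
# Sketch only: illustrative grid and split, not the notebook's exact configuration.
import pandas as pd
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import classification_report
from xgboost import XGBClassifier

df = pd.read_csv("creditcard.csv")                      # Kaggle file (assumed local path)
X, y = df.drop(columns=["Class"]), df["Class"]

# Stratified split keeps the ~0.17% fraud rate in both train and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Score the grid search on F1 rather than accuracy because of the class imbalance.
param_grid = {
    "max_depth": [3, 5],
    "n_estimators": [100, 300],
    "learning_rate": [0.1, 0.3],
}
search = GridSearchCV(
    XGBClassifier(eval_metric="logloss", n_jobs=-1),
    param_grid, scoring="f1", cv=3,
)
search.fit(X_train, y_train)

print(search.best_params_)
print(classification_report(y_test, search.predict(X_test),
                            target_names=["Genuine", "Fraud"]))
```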
It is important that credit card companies are able to recognize fraudulent credit card transactions so that customers are not charged for items that they did not purchase.
The dataset contains transactions made by credit cards in September 2013 by European cardholders.
This dataset presents transactions that occurred over two days, with 492 frauds out of 284,807 transactions. The dataset is
highly unbalanced: the positive class (frauds) accounts for 0.172% of all transactions.
It contains only numerical input variables which are the result of a PCA transformation. Unfortunately, due to confidentiality issues, we
cannot provide the original features and more background information about the data. Features V1, V2, … V28 are the principal
components obtained with PCA; the only features which have not been transformed with PCA are 'Time' and 'Amount'. Feature 'Time'
contains the seconds elapsed between each transaction and the first transaction in the dataset. The feature 'Amount' is the transaction
amount; this feature can be used for example-dependent cost-sensitive learning. Feature 'Class' is the response variable and it takes
value 1 in case of fraud and 0 otherwise.
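
The layout described above can be checked with a few lines of pandas (assuming the Kaggle file is saved locally as creditcard.csv):

```python
# Quick sanity check of the columns and class imbalance described above.
import pandas as pd

df = pd.read_csv("creditcard.csv")
print(df.shape)                           # (284807, 31): Time, V1..V28, Amount, Class
print(df["Class"].value_counts())         # 0 (genuine): 284315, 1 (fraud): 492
print(f"{df['Class'].mean():.3%}")        # fraction of frauds, ~0.172%
print(df[["Time", "Amount"]].describe())  # the only features not produced by PCA
```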
Given the class imbalance ratio, we recommend measuring performance using the Area Under the Precision-Recall Curve (AUPRC),
since confusion-matrix accuracy is not meaningful for unbalanced classification.
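
Continuing the training sketch above (and reusing its fitted `search` and held-out split), AUPRC can be estimated with scikit-learn's average precision on the predicted fraud probabilities:

```python
# AUPRC via average precision, as recommended for this level of imbalance.
from sklearn.metrics import average_precision_score, precision_recall_curve

proba = search.predict_proba(X_test)[:, 1]        # P(fraud) for each test transaction
auprc = average_precision_score(y_test, proba)    # average precision ~ area under PR curve
precision, recall, thresholds = precision_recall_curve(y_test, proba)
print(f"AUPRC (average precision): {auprc:.3f}")
```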