📜 arXiv 📚 Other tabular DL projects
> [!IMPORTANT]
> Check out the new tabular DL model: TabM (SoTA on TabReD)
TL;DR: TabReD is a new benchmark of eight industry-grade tabular datasets from real-world applications, featuring temporally evolving data and feature-rich tables.
Advances in machine learning research drive progress in real-world applications. To ensure this progress, it is important to understand the potential pitfalls on the way from a novel method's success on academic benchmarks to its practical deployment. In this work, we analyze existing tabular benchmarks and find two common characteristics of tabular data in typical industrial applications that are underrepresented in the datasets usually used for evaluation in the literature. First, in real-world deployment scenarios, the distribution of data often changes over time. To account for this distribution drift, time-based train/test splits should be used in evaluation. However, popular tabular datasets often lack the timestamp metadata to enable such evaluation. Second, a considerable portion of datasets in production settings stem from extensive data acquisition and feature engineering pipelines. This can have an impact on the absolute and relative number of predictive, uninformative, and correlated features compared to academic datasets. In this work, we aim to understand how recent research advances in tabular deep learning transfer to these underrepresented conditions. To this end, we introduce TabReD -- a collection of eight industry-grade tabular datasets. We reassess a large number of tabular ML models and techniques on TabReD. We demonstrate that evaluation on time-based data splits leads to a different ranking of methods compared to evaluation on random splits, which are common in current benchmarks. Furthermore, simple MLP-like architectures and GBDT show the best results on the TabReD datasets, while other methods are less effective in the new setting.
You can download and preprocess the TabReD datasets by running the scripts from the `./preprocessing` directory. For the Kaggle datasets, you should have a Kaggle account and enroll in the respective competitions.
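Downloading Kaggle data from scripts also requires API credentials. Below is a minimal one-time setup, assuming the preprocessing scripts use the official `kaggle` CLI (an assumption; check the scripts themselves):

```bash
# One-time Kaggle API setup (assumption: the preprocessing scripts use the official `kaggle` CLI).
# Create an API token at https://www.kaggle.com/settings ("Create New Token"),
# then place the downloaded kaggle.json where the CLI expects it:
mkdir -p ~/.kaggle
mv ~/Downloads/kaggle.json ~/.kaggle/kaggle.json
chmod 600 ~/.kaggle/kaggle.json  # the CLI warns about world-readable credentials
```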
Here are the TabReD datasets with links and basic metadata:
| Dataset | Features | Task | Instances Used | Instances Available | Link |
|---|---|---|---|---|---|
| Homesite Insurance | 299 | Classification | 260,753 | - | Competition |
| Ecom Offers | 119 | Classification | 160,057 | - | Competition |
| Homecredit Default | 696 | Classification | 381,664 | 1,526,659 | Competition |
| Sberbank Housing | 392 | Regression | 28,321 | - | Competition |
| Cooking Time | 192 | Regression | 319,986 | 12,799,642 | Dataset |
| Delivery ETA | 223 | Regression | 416,451 | 17,044,043 | Dataset |
| Maps Routing | 986 | Regression | 340,981 | 13,639,272 | Dataset |
| Weather | 103 | Regression | 423,795 | 16,951,828 | Dataset |
Repository structure:

- `./preprocessing` - preprocessing scripts for all the datasets
- `./exp` - all experiment logs
- `./bin` - scripts for launching the experiments
- `./lib` - shared library code: data loading and utilities
There are two environments: `tabred-env-local.yaml` for local development on machines without GPUs, and `tabred-env.yaml` for machines with GPUs. To create an environment with all the dependencies, run `micromamba create -f` with the env file of your choice, as shown below.
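For example, on a server with GPUs (the environment name `tabred` used for activation is an assumption; check the `name:` field in the YAML file):

```bash
# Create the GPU environment from its spec file.
micromamba create -f tabred-env.yaml
# Activate it; `tabred` is an assumed name -- check the `name:` field in the YAML.
micromamba activate tabred
```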
To reproduce the results for the MLP on the maps-routing dataset (the full command sequence is sketched after this list):

1. Create an environment
2. Create the dataset (run the preprocessing script)
3. Run `export CUDA_VISIBLE_DEVICES=0` (or whatever device you like)
4. Run `python bin/go.py exp/mlp/maps-routing/tuning.toml --force` (`--force` deletes the existing outputs)
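Putting the steps together (a sketch; the environment name and the preprocessing script path are assumptions, so substitute the actual script from `./preprocessing`):

```bash
# Assumed environment name and preprocessing script path -- adjust to your setup.
micromamba activate tabred
python preprocessing/maps_routing.py        # placeholder: run the actual maps-routing script
export CUDA_VISIBLE_DEVICES=0               # pick the GPU to use
python bin/go.py exp/mlp/maps-routing/tuning.toml --force  # --force deletes existing outputs
```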
There is also a datasheet for the benchmark.