In so far as a scientific statement speaks about reality, it must be falsifiable: and in so far as it is not falsifiable, it does not speak about reality. – Karl R. Popper, The Logic of Scientific Discovery
Why do Random Forests Work? Understanding Tree Ensembles as Self-Regularizing Adaptive Smoothers
Alicia Curth, Alan Jeffares, Mihaela van der Schaar
arXiv 2024
A comparison of neural and non-neural machine learning models for food safety risk prediction with European Union RASFF data
Alberto Nogales, Rodrigo Díaz-Morón, Álvaro J. García-Tejedor
Food Control 2022
Why do tree-based models still outperform deep learning on tabular data?
Léo Grinsztajn, Edouard Oyallon, Gaël Varoquaux
arXiv 2022
Revisiting Deep Learning Models for Tabular Data
Yury Gorishniy, Ivan Rubachev, Valentin Khrulkov, Artem Babenko
NeurIPS 2021
Do We Really Need Deep Learning Models for Time Series Forecasting?
Shereen Elsayed, Daniela Thyssens, Ahmed Rashed, Hadi Samer Jomaa, Lars Schmidt-Thieme
arXiv 2021
On the cost-effectiveness of neural and non-neural approaches and representations for text classification: A comprehensive comparative study
Washington Cunha, Vítor Mangaravite, Christian Gomes, Sérgio Canuto, Elaine Resende, Cecilia Nascimento, Felipe Viegas, Celso França, Wellington Santos Martins, Jussara M. Almeida, Thierson Rosa, Leonardo Rocha, Marcos André Gonçalves
Information Processing & Management 2021
Non-neural Models Matter: A Re-evaluation of Neural Referring Expression Generation Systems
Fahime Same, Guanyi Chen, Kees van Deemter
arXiv 2022
Top-N Recommendation Algorithms: A Quest for the State-of-the-Art
Vito Walter Anelli, Alejandro Bellogín, Tommaso Di Noia, Dietmar Jannach, Claudio Pomo
UMAP 2022
Revisiting the Performance of iALS on Item Recommendation Benchmarks
Steffen Rendle, Walid Krichene, Li Zhang, Yehuda Koren
RecSys 2022
Evaluation of algorithms for interaction-sparse recommendations: neural networks don’t always win
Yasamin Klingler, Claude Lehmann, João Pedro Monteiro, Carlo Saladin, Abraham Bernstein, Kurt Stockinger
EDBT 2022
Session-aware Recommendation: A Surprising Quest for the State-of-the-art
Sara Latifi, Noemi Mauro, Dietmar Jannach
Information Sciences 2021
Negative Interactions for Improved Collaborative Filtering: Don’t go Deeper, go Higher
Harald Steck, Dawen Liang
RecSys 2021
Why Are Deep Learning Models Not Consistently Winning Recommender Systems Competitions Yet? A Position Paper
Dietmar Jannach, Gabriel Moreira, Even Oldridge
RecSys Challenge 2020
Modeling Personalized Item Frequency Information for Next-basket Recommendation
Haoji Hu, Xiangnan He, Jinyang Gao, Zhi-Li Zhang
SIGIR 2020
Neural Collaborative Filtering vs. Matrix Factorization Revisited
Steffen Rendle, Walid Krichene, Li Zhang, John Anderson
RecSys 2020
On the Difficulty of Evaluating Baselines
Steffen Rendle, Li Zhang, Yehuda Koren
arXiv 2019
Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches
Maurizio Ferrari Dacrema, Paolo Cremonesi, Dietmar Jannach
RecSys 2019
Embarrassingly Shallow Autoencoders for Sparse Data
Harald Steck
WWW 2019
Evaluation of Session-based Recommendation Algorithms
Malte Ludewig, Dietmar Jannach
User Modeling and User-Adapted Interaction 2018
Are Neural Rankers still Outperformed by Gradient Boosted Decision Trees?
Zhen Qin, Le Yan, Honglei Zhuang, Yi Tay, Rama Kumar Pasumarthi, Xuanhui Wang, Michael Bendersky, Marc Najork
ICLR 2021
Critically Examining the “Neural Hype”: Weak Baselines and the Additivity of Effectiveness Gains from Neural Ranking Models
Wei Yang, Kuang Lu, Peilin Yang, Jimmy Lin
arXiv 2019
A Metric Learning Reality Check
Kevin Musgrave, Serge Belongie, Ser-Nam Lim
arXiv 2020