A PyTorch-based library for all things neural differential equations. Maintained by DiffEqML.
| System / Python version | 3.6 | 3.7 | 3.8+ |
|---|---|---|---|
| Ubuntu 16.04 | | | |
| Ubuntu 18.04 | | | |
| Windows | | | |
```bash
git clone https://github.com/DiffEqML/torchdyn.git
cd torchdyn
python setup.py install
```
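Assuming a packaged release has been published on PyPI (check the repository for the current status), installation via pip may also be possible:

```bash
pip install torchdyn
```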
Documentation: https://torchdyn.readthedocs.io/
Interest in the blend of differential equations, deep learning and dynamical systems has been reignited by recent works [1,2]. Modern deep learning frameworks such as PyTorch, coupled with progressive improvements in computational resources, have allowed the continuous version of neural networks, with proposals dating back to the 80s [3], to finally come to life and provide a novel perspective on classical machine learning problems (e.g. density estimation [4]).
Since the introduction of the `torchdiffeq` library with the seminal work [1] in 2018, little effort has been expended by the PyTorch research community on a unified framework for neural differential equations. While significant progress is being made by the Julia community and SciML [5], we believe `torchdyn`, a native PyTorch library with a focus on deep learning, to be a valuable asset for the research ecosystem.
Central to the `torchdyn` approach are continuous neural networks, where width, depth (or both) are taken to their infinite limit. On the optimization front, we consider continuous "data-stream" regimes and gradient flow methods, where the dataset represents a time-evolving signal processed by the neural network to adapt its parameters.
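In the simplest depth-continuous instance [1], the forward pass is the solution of an initial value problem in which the discrete layer index is replaced by a continuous depth variable (the notation below is illustrative):

```latex
\dot{z}(s) = f_\theta\big(s, z(s)\big), \qquad z(0) = x, \qquad \hat{y} = z(S)
```

Here $f_\theta$ is a neural network parametrizing the vector field, and the output $\hat{y}$ is read off the state at the final depth $S$.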
By providing a centralized, easy-to-access collection of model templates, tutorials and application notebooks, we hope to speed up research in this area and ultimately contribute to turning neural differential equations into an effective tool for control, system identification and common machine learning tasks.
The development of `torchdyn`, sparked by the joint work of Michael Poli & Stefano Massaroli, has been supported throughout by their almae matres. In particular, by Prof. Jinkyoo Park (KAIST), Prof. Atsushi Yamashita (The University of Tokyo) and Prof. Hajime Asama (The University of Tokyo).
`torchdyn` is maintained by the core DiffEqML team, with the generous support of the deep learning community.
The current offering of `torchdyn` is limited compared to the rich ecosystem of continuous deep learning. If you are a researcher working in this space, and particularly if one of your previous works happens to be a WIP feature, feel free to reach out and help us with its implementation.
- Basics: quickstart ✅, cookbook ✅
- Expressivity and augmentation: crossing trajectories ✅, augmentation ✅, higher order ✅
- Adjoint and beyond: generalized adjoint ✅, adaptive checkpointing ⬜️
- Regularization tutorials: regularization ⬜️, adaptive depth ⬜️, STEER ⬜️
- Controlled Neural DEs: data control ✅, neural cde ⬜️
- Energy models: hamiltonian nets ✅, lagrangian nets ✅, stable models ✅
- Image classification: MNIST ✅, CIFAR10 and ImageNet ⬜️
- Density estimation tutorials: continuous normalizing flows ✅, ffjord ✅, manifold cnf ⬜️
- Density estimation applications: MNIST ⬜️, CIFAR10 ⬜️
- Hybrid Neural DEs: hybrid models ⬜️
- Variational Neural DE tutorials: variational neural ode ⬜️, variational neural sde ⬜️
- Graph Neural DEs (GDEs) tutorials: gde node classification ✅, autoregressive gde ⬜️
- GDE applications: traffic forecasting ⬜️
- Solver suite: Euler ✅, Runge-Kutta(4) ✅, Dormand-Prince ⬜️, symplectic ⬜️, stiff ode ⬜️, euler-maruyama ✅, higher order sde ⬜️ (a minimal fixed-step solver sketch follows this list)
- Specific variants: ode2vae ⬜️, anodev2 ⬜️, gruode-bayes ⬜️, neural jump stochastic ⬜️, ode2ode ⬜️, hamiltonian cnf ⬜️
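To make the solver suite entry concrete, here is a minimal sketch of what a fixed-step explicit Euler solver does. This is a stand-alone illustration, not `torchdyn`'s internal implementation; the name `euler_integrate` is ours:

```python
import torch

def euler_integrate(f, x0, s_span):
    """Explicit Euler: integrate dx/ds = f(s, x) over the grid s_span.

    f      -- callable (s, x) -> dx/ds, output shaped like x
    x0     -- initial state (torch.Tensor)
    s_span -- 1D tensor of increasing integration points
    """
    x, trajectory = x0, [x0]
    for i in range(len(s_span) - 1):
        ds = s_span[i + 1] - s_span[i]
        x = x + ds * f(s_span[i], x)  # x_{k+1} = x_k + ds * f(s_k, x_k)
        trajectory.append(x)
    return torch.stack(trajectory)

# Integrate dx/ds = -x from x(0) = 1; the solution approximates exp(-s)
sol = euler_integrate(lambda s, x: -x, torch.ones(1), torch.linspace(0, 1, 11))
```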
`torchdyn` leverages modern PyTorch best practices and handles training with `pytorch-lightning` [6]. We build Graph Neural ODEs utilizing the Graph Neural Networks (GNNs) API of `dgl` [6].
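For context, training with `pytorch-lightning` typically means wrapping a model in a `LightningModule`. The sketch below is a generic example under that assumption; the `Learner` class and its hyperparameters are illustrative, not `torchdyn` utilities:

```python
import torch
import pytorch_lightning as pl

class Learner(pl.LightningModule):
    """Generic training wrapper around an arbitrary nn.Module (illustrative)."""
    def __init__(self, model: torch.nn.Module):
        super().__init__()
        self.model = model

    def training_step(self, batch, batch_idx):
        # Standard supervised step: forward pass, then a classification loss
        x, y = batch
        return torch.nn.functional.cross_entropy(self.model(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.model.parameters(), lr=1e-3)

# Usage, given a model and a DataLoader:
# trainer = pl.Trainer(max_epochs=10)
# trainer.fit(Learner(model), train_dataloader)
```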
Our aim with `torchdyn` is to provide a unified, flexible API to the most recent advances in continuous deep learning. Examples include neural differential equation variants such as:
- Neural Ordinary Differential Equations (Neural ODE) [1]
- Neural Stochastic Differential Equations (Neural SDE) [7,8]
- Graph Neural ODEs [9]
- Hamiltonian Neural Networks [10]
as well as:

- Depth-variant versions
- Recurrent or "hybrid" versions
- Augmentation strategies to relieve neural differential equations of their expressivity limitations and reduce the computational burden of the numerical solver (a minimal sketch follows this list)
- Alternative or modified adjoint training techniques
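As a rough illustration of two of these ideas, the sketch below shows a depth-variant vector field (the depth variable concatenated to the state) and ANODE-style 0-augmentation, written in plain PyTorch rather than against `torchdyn`'s actual API; all names here are ours:

```python
import torch
import torch.nn as nn

class DepthVariantField(nn.Module):
    """Vector field f(s, x) that explicitly depends on depth s (illustrative)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim))

    def forward(self, s, x):
        # Append the scalar depth s to every state in the batch
        s_col = s * torch.ones(x.shape[0], 1)
        return self.net(torch.cat([x, s_col], dim=-1))

def augment(x, aug_dims=2):
    """ANODE-style 0-augmentation: lift states to a higher-dimensional space,
    letting trajectories 'cross' in the original coordinates."""
    return torch.cat([x, torch.zeros(x.shape[0], aug_dims)], dim=-1)
```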
The current version of `torchdyn` contains the following self-contained quickstart examples / tutorials (with a lot more to come):
- `00_quickstart`: offers a quickstart guide for `torchdyn` and Neural DEs
- `01_cookbook`: here, we explore the API and how to define Neural DE variants within `torchdyn`
- `02_image_classification`: convolutional Neural DEs on MNIST
- `03_crossing_trajectories`: a standard benchmark problem, highlighting expressivity limitations of Neural DEs, and how they can be addressed
- `04_augmentation_strategies`: augmentation API for Neural DEs
and the advanced tutorials
- `05_generalized_adjoint`: minimize integral losses with `torchdyn`'s special integral loss adjoint [18] to track a sinusoidal signal
- `06_higher_order`: higher-order Neural ODE variants for classification
- `07a_continuous_normalizing_flows`: recover densities with continuous normalizing flows [1]
- `07b_ffjord`: recover densities with FFJORD variants of continuous normalizing flows [19]
- `08_hamiltonian_nets`: learn dynamics of energy-preserving systems with a simple implementation of Hamiltonian Neural Networks in `torchdyn` [10] (a minimal sketch of the idea follows this list)
- `09_lagrangian_nets`: learn dynamics of energy-preserving systems with a simple implementation of Lagrangian Neural Networks in `torchdyn` [12]
- `10_stable_neural_odes`: learn dynamics of dynamical systems with a simple implementation of Stable Neural Flows in `torchdyn` [18]
- `11_gde_node_classification`: first steps into the vast world of Neural GDEs [9], or ODEs on graphs parametrized by graph neural networks (GNNs). Classification on Cora
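To give a flavor of the energy-based tutorials, here is a minimal sketch of the idea behind Hamiltonian Neural Networks [10]: parametrize a scalar energy H with a network and obtain the vector field via a symplectic rotation of its gradient. This is our plain-PyTorch illustration, not the notebook's code:

```python
import torch
import torch.nn as nn

class HamiltonianField(nn.Module):
    """(dq/dt, dp/dt) = (dH/dp, -dH/dq) for a learned scalar energy H (illustrative)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.dim = dim
        # H: R^{2*dim} -> R, a learned energy over states x = (q, p)
        self.H = nn.Sequential(nn.Linear(2 * dim, hidden), nn.Tanh(),
                               nn.Linear(hidden, 1))

    def forward(self, x):
        # Treat x as the differentiation variable; create_graph keeps the
        # gradient differentiable so H's parameters can still be trained
        x = x.detach().requires_grad_(True)
        with torch.enable_grad():
            grad = torch.autograd.grad(self.H(x).sum(), x, create_graph=True)[0]
        dHdq, dHdp = grad[:, :self.dim], grad[:, self.dim:]
        return torch.cat([dHdp, -dHdq], dim=-1)  # symplectic rotation of the gradient
```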
Check our wiki for a full description of available features.
`torchdyn` is meant to be a community effort: we welcome all contributions of tutorials, model variants, numerical methods and applications related to continuous deep learning. We do not have specific style requirements, though we subscribe to many of Jeremy Howard's ideas.
Choosing what to work on: There is always ongoing work on new features, tests and tutorials. Contributing to any of the above is extremely valuable to us. If you wish to work on additional features not currently WIP, feel free to reach out on Slack or via email. We'll be glad to discuss details.
If you find `torchdyn` valuable for your research or applied projects, please consider citing:
```bibtex
@article{massaroli2020stable,
  title={Stable Neural Flows},
  author={Massaroli, Stefano and Poli, Michael and Bin, Michelangelo and Park, Jinkyoo and Yamashita, Atsushi and Asama, Hajime},
  journal={arXiv preprint arXiv:2003.08063},
  year={2020}
}
```