Trainer API #4
Are you still developing this?
More no than yes. The initial goals of Avalon are now fulfilled by other libraries: Flux (which is now stable enough), Yota (graph-based AD), and ONNX (interop with existing DL frameworks). Avalon may still see another iteration of development as an alternative top-level package over the same tools, but at the moment it's not on the radar.
I thought that in order to use …
Not really. Let me explain how some packages communicate with each other.
All these libraries are built with the idea of an open ecosystem where you can combine tools in whatever way you want. For example, you can avoid any top-level deep learning library at all and instead use NNlib + Optimisers + any of the mentioned AD packages directly. Or you can use Flux, but replace Zygote with Yota and Optimisers with your own gradient descent implementation, etc. Unfortunately, the flexibility of the open ecosystem and the wide range of competing tools also mean that proportionally less time is spent on testing each of the combinations. So don't expect a flawless experience, but the maintainers are usually very responsive. Some other packages you may want to know about:
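The "replace Optimisers with your own gradient descent implementation" point above can be sketched with nothing but the Julia standard library. This is a minimal illustration, not an Avalon or Optimisers.jl API; the names `grad_w` and `fit` are hypothetical.

```julia
# Sketch: hand-rolled gradient descent standing in for Optimisers.jl.
# We fit y ≈ w*x by minimising mean squared error over a batch.

# Analytic gradient of the MSE loss w.r.t. w -- this is the part an
# AD package such as Zygote or Yota would compute for you.
grad_w(w, xs, ys) = 2 * sum((w .* xs .- ys) .* xs) / length(xs)

# Plain gradient descent loop: any gradient source plugs in here.
function fit(xs, ys; lr=0.1, steps=200)
    w = 0.0
    for _ in 1:steps
        w -= lr * grad_w(w, xs, ys)
    end
    return w
end

fit([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # converges to ≈ 2.0
```

Because the optimiser is just a loop over a gradient function, swapping in Optimisers.jl (or a different AD backend) changes only which function produces the gradient, not the structure of the training loop.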
I am aware of the structure of a deep learning framework and of the Julia ecosystem's approach of building each element on its own with the vision of mix & match. This is a great vision, yet in practice, engineering guidelines say that tight development yields faster iterations and more robust results in a short time, while separation means independent development at the cost of challenging integration. Since we're in practice, I was after a combination that works well. Too bad not enough momentum was behind it; as you wrote, there is no syncing of the efforts.
High-level API for model training similar to PyTorch Lightning.
One advantage over Python is that we don't need a separate module type to add methods to a model class (e.g. `train_step()` and similar); with multiple dispatch, such methods can be defined directly for any model type.
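The point about not needing a separate module type can be sketched as follows. This is a hypothetical illustration, not an actual Avalon API; `SimpleModel`, `predict`, and `train_step` are made-up names.

```julia
# Sketch: in Julia, a training step is just another method defined for
# the model's own type -- no wrapper class like PyTorch Lightning's
# `LightningModule` with its `training_step` is required.

struct SimpleModel
    w::Float64
    b::Float64
end

# Forward pass as a plain function on the model.
predict(m::SimpleModel, x) = m.w * x + m.b

# Training step added via multiple dispatch, outside any class body.
function train_step(m::SimpleModel, batch)
    x, y = batch
    loss = (predict(m, x) - y)^2
    return loss
end

train_step(SimpleModel(2.0, 1.0), (3.0, 7.0))  # loss == 0.0
```

A generic trainer would then only need models to implement `train_step(model, batch)`; any user type can opt in by defining that one method, without inheriting from a framework base class.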