
Add training and inference feature flags for selective re-export in burn meta crate #729

Closed
antimora opened this issue Aug 29, 2023 · 8 comments
Labels
enhancement Enhance existing features

Comments

antimora (Collaborator) commented Aug 29, 2023

The burn meta crate serves as an aggregator for various sub-crates, including burn-core, burn-train, and burn-dataset. It's often the only crate directly referenced in projects. To optimize for two main scenarios, we propose introducing the following feature flags:

  1. training: Enables re-export of crates and modules specifically for training workflows.
  2. inference: Enables only the minimal set of crates needed for inference tasks.

By implementing these feature flags, we can make the burn crate more lightweight and tailored for specific use-cases.

By default, both flags are enabled.
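A minimal sketch of how the proposed flags might look in the meta crate's `Cargo.toml` (the exact feature and dependency wiring here is illustrative, not the final implementation):

```toml
# Hypothetical feature layout for the burn meta crate.
[features]
default = ["training", "inference"]
# Pulls in the crates needed for training workflows.
training = ["inference", "dep:burn-train", "dep:burn-dataset"]
# Minimal set of crates needed for inference tasks.
inference = ["dep:burn-core"]

[dependencies]
burn-core = { version = "0.9.0", optional = true }
burn-train = { version = "0.9.0", optional = true }
burn-dataset = { version = "0.9.0", optional = true }
```

An inference-only consumer could then depend on `burn` with `default-features = false, features = ["inference"]` and avoid compiling the training stack.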

@antimora antimora added the enhancement Enhance existing features label Aug 29, 2023
antimora (Collaborator, Author)
@nathanielsimard , @louisfd , @dae, what are your thoughts regarding this proposal?

nathanielsimard (Member)
Yes, I think it's a good idea. Of course, people will be able to create their own training pipeline, datasets, and use their own implementation without being forced to compile everything (looking at you, LibTorch 😅), but having them nicely packaged by default can't hurt.

Also, we might consider exporting the backends with none enabled by default.

```toml
burn = { version = "0.9.0", features = ["torch", "ndarray", "wgpu"] }
```

```rust
use burn::backend::{NdArrayBackend, TchBackend, WgpuBackend};
```
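Inside the meta crate, that suggestion could be wired up with feature-gated re-exports along these lines (a sketch only; the backend crate paths and type names are assumptions based on the snippet above):

```rust
// Hypothetical backend module in the burn meta crate.
// Each re-export compiles only when its feature flag is enabled,
// so unused backends are never built.
pub mod backend {
    #[cfg(feature = "ndarray")]
    pub use burn_ndarray::NdArrayBackend;

    #[cfg(feature = "torch")]
    pub use burn_tch::TchBackend;

    #[cfg(feature = "wgpu")]
    pub use burn_wgpu::WgpuBackend;
}
```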

What are your thoughts, @antimora?

dae (Contributor) commented Aug 30, 2023

I suspect our app is a bit of an outlier here, as we'd like to use Burn for on-device training, but won't be taking advantage of the inference features (we'll just be feeding the output of our model directly into some simple equations).

For the training case, you currently have to import the backend (and things like autodiff too?) separately. I think it would be a bit easier to follow if it were all or nothing here: either make all functionality available via feature flags and re-exports so users depend on a single crate, or consistently require them to import what they need from each crate.

antimora (Collaborator, Author)
> What are your thoughts, @antimora?

We might as well do the whole thing as @dae has suggested, since the packages are released in lockstep anyway, and burn is just a meta package.

We just need to come up with a good set of features and names. Let's let this idea brew some more; it will definitely make Burn more user friendly.

nathanielsimard (Member)
@dae @antimora I don't think we have specific packages for general inference. Burn-import is a different beast, and you know when you need it. We could add a feature flag instead of adding each crate individually, but the complexity will grow if each sub-crate also has feature flags, like ndarray's BLAS support.

dae (Contributor) commented Aug 30, 2023

Based on my short experience tracking down #731, I do fear that a mix of feature flags + one-meta-crate-to-import-them-all might be rather tricky to manage/lead to a combinatorial explosion.

antimora (Collaborator, Author)
Sounds like it might be tricky, but it might still be worth it for basic use cases. For more nuanced ones, we can suggest importing crates individually.

dae added a commit to ankitects/burn that referenced this issue Aug 31, 2023

Partially implements tracel-ai#729:

- The burn crate no longer enables the training module unless the train feature is enabled.
- Added a new train-cli feature to enable the metric plotting/CLI rendering.
dae mentioned this issue Aug 31, 2023
dae added a commit to ankitects/burn that referenced this issue Sep 1, 2023

Partially implements tracel-ai#729:

- Added a new train-cli feature to enable the metric plotting/CLI rendering.
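For downstream users, the commits above make training support opt-in. A sketch of what a dependent crate's `Cargo.toml` might look like under this scheme (the `train` and `train-cli` feature names follow the commit messages; the version is illustrative):

```toml
# Inference-only build: skip the training module entirely.
[dependencies]
burn = { version = "0.9.0", default-features = false }

# Training build with CLI metric rendering would instead use:
# burn = { version = "0.9.0", features = ["train", "train-cli"] }
```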
antimora (Collaborator, Author) commented Sep 8, 2023

We can consider this done, since we can now disable/enable training-related features. It also looks like we are re-exporting the backends via feature flags as well.

Closed via #773 and other linked PRs.

@antimora antimora closed this as completed Sep 8, 2023