Add training and inference feature flags for selective re-export in `burn` meta crate #729
Comments
@nathanielsimard, @louisfd, @dae, what are your thoughts regarding this proposal?
Yes, I think it's a good idea. Of course, people will be able to create their own training pipeline and datasets, and use their own implementation without being forced to compile everything (looking at you, LibTorch 😅), but having them nicely packaged by default can't hurt. Also, we might consider exporting the backends, with none enabled by default:

```toml
burn = { version = "0.9.0", features = ["torch", "ndarray", "wgpu"] }
```

```rust
use burn::backend::{NdArrayBackend, TchBackend, WgpuBackend};
```

What are your thoughts, @antimora?
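A sketch of the wiring this opt-in backend idea implies, assuming the meta crate forwards `torch`/`ndarray`/`wgpu` features to optional backend dependencies (version numbers and the module path are illustrative, not the shipped layout):

```toml
# burn/Cargo.toml — each backend is optional and off by default
[dependencies]
burn-tch = { version = "0.9.0", optional = true }
burn-ndarray = { version = "0.9.0", optional = true }
burn-wgpu = { version = "0.9.0", optional = true }

[features]
default = []                  # no backend compiled unless explicitly requested
torch = ["burn-tch"]
ndarray = ["burn-ndarray"]
wgpu = ["burn-wgpu"]
```

```rust
// burn/src/backend.rs — re-export a backend only when its feature is enabled
#[cfg(feature = "ndarray")]
pub use burn_ndarray::NdArrayBackend;
#[cfg(feature = "torch")]
pub use burn_tch::TchBackend;
#[cfg(feature = "wgpu")]
pub use burn_wgpu::WgpuBackend;
```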
I suspect our app is a bit of an outlier here, as we'd like to use Burn for on-device training, but won't be taking advantage of the inference features (we'll just be feeding the output of our model directly into some simple equations). For the training case, you currently have to import the backend (and things like autodiff too?) separately. I think it would be a bit easier to follow if you were all or nothing here: either make all functionality available with feature flags and re-exports and let users depend on a single crate, or consistently require them to import what they need from each crate.
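For context, the split-crate situation described here looks roughly like this on the consumer side (a sketch; the `autodiff` feature name and the version numbers are assumptions, not the crate's actual manifest):

```toml
# Today: the backend and autodiff are separate dependencies
[dependencies]
burn = "0.9.0"
burn-autodiff = "0.9.0"
burn-ndarray = "0.9.0"

# The "all or nothing" alternative: one dependency, feature-gated
# burn = { version = "0.9.0", features = ["train", "autodiff", "ndarray"] }
```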
We might as well do the whole thing as @dae has suggested, since the packages are released in lockstep anyway, and we will just come up with a good feature set and names. Let's let this idea brew some more; it will definitely make Burn more user-friendly.
@dae @antimora I don't think we have specific packages for general inference.
Based on my short experience tracking down #731, I do fear that a mix of feature flags + one-meta-crate-to-import-them-all might be rather tricky to manage and could lead to a combinatorial explosion.
Sounds like it might be tricky, but it might still be worth it for basic use cases. For more nuanced ones, we can suggest importing crates individually.
Partially implements tracel-ai#729:
- The `burn` crate no longer enables the training module unless the `train` feature is enabled.
- Added a new `train-cli` feature to enable the metric plotting/CLI rendering.
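With the split described in these commits, a downstream project chooses at dependency time whether training code gets compiled at all. A sketch of the two consumer profiles (the `train` and `train-cli` feature names come from the commits above; the version and backend choice are illustrative):

```toml
# Inference-only consumer: burn-train is never compiled
[dependencies]
burn = { version = "0.9.0", features = ["ndarray"] }

# Training consumer: enable the training module plus CLI metric rendering
# burn = { version = "0.9.0", features = ["train", "train-cli", "ndarray"] }
```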
We can consider this done, since we can now disable/enable the training-related features. It also looks like we are re-exporting the backends via flags as well. Closed via #773 and other linked PRs.
The `burn` meta crate serves as an aggregator for various sub-crates, including `burn-core`, `burn-train`, and `burn-dataset`. It's often the only crate directly referenced in projects. To optimize for two main scenarios, we propose introducing the following feature flags:

- `training`: enables re-export of the crates and modules specifically needed for training workflows.
- `inference`: enables only the minimal set of crates needed for inference tasks.

By implementing these feature flags, we can make the `burn` crate more lightweight and tailored to specific use cases. By default, both flags are enabled.
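A minimal sketch of how this gating could look in the meta crate, assuming `burn-train` and `burn-dataset` become optional dependencies pulled in by the `training` flag (the exact feature wiring and re-export module names are assumptions, not the merged implementation):

```toml
# burn/Cargo.toml
[dependencies]
burn-core = "0.9.0"
burn-train = { version = "0.9.0", optional = true }
burn-dataset = { version = "0.9.0", optional = true }

[features]
default = ["training", "inference"]  # both on by default, per the proposal
inference = []                       # burn-core alone covers inference
training = ["inference", "burn-train", "burn-dataset"]
```

```rust
// burn/src/lib.rs — selective re-export behind the flags
pub use burn_core::*;

// Training-only sub-crates are compiled and re-exported only with `training`
#[cfg(feature = "training")]
pub use burn_train as train;

#[cfg(feature = "training")]
pub use burn_dataset as data;
```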