Extract ONNX importing into its own crate #1812
Comments
I think we can still keep the crate in the burn repo, since it's gonna be easier to update it, but we can give it a name without the burn prefix, something like |
This is a great idea. From the beginning I was intending Burn's ONNX IR to be its own crate. For now, however, we should keep it within the burn repo with a non-burn name. We are still making changes to the IR. In addition to missing ONNX node types, we need to add three big features: subgraphs (#724), functions (#723), and scalar graph inputs (#1688). Once we separate the ONNX IR, we should put forward a design document that clearly states design goals that work for other projects as well. Here are the names I would like to suggest. It would be great if we could somehow convey the idea that we are unifying the ONNX representation from various OpSets into one.
@skewballfox @laggui @louisfd @nathanielsimard what do you think about the names?
I think |
Same as @laggui, but I also like |
Feature description
I think the files in `crates/burn-import/src/onnx/` listed below, along with the corresponding onnx_test, should be separated into their own crate (a sketch of what that crate's root might look like follows the list):

- `from_onnx.rs`
- `coalesce.rs`
- `node_remap.rs`
- `ir.rs`
- `proto_conversion.rs`
- `dim_inference.rs`
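For a sense of scale, the extracted crate's `lib.rs` could be little more than these modules moved out of `burn-import` and re-exported, with the burn-specific `to_burn.rs` staying behind and depending on the new crate. A minimal sketch, assuming an entry-point name and signature for illustration only:

```rust
// lib.rs of the hypothetical extracted crate -- just the existing
// modules moved out of crates/burn-import/src/onnx/.
pub mod coalesce;
pub mod dim_inference;
pub mod from_onnx;
pub mod ir;
pub mod node_remap;
pub mod proto_conversion;

// A single parsing entry point re-exported at the crate root,
// e.g. fn parse_onnx(path: &std::path::Path) -> ir::OnnxGraph.
pub use from_onnx::parse_onnx;
```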
Feature motivation
I think it would be beneficial both to burn and to multiple other Rust DL projects to create a generalized ONNX library, and burn is most of the way there. All of the code in those files currently solves the general problem of parsing an ONNX graph; the burn-specific conversion doesn't happen until `to_burn.rs`.

The benefit to burn is more contributors for that specific part of op support, and we could work ahead of the actual op support on the burn side. We (I) could go ahead and generate all (or most) of the ONNX models for the ops, along with the corresponding ONNX tests.
(Optional) Suggest a Solution
The tests would have to be changed somewhat: either using macros or traits so that libraries could drop in their own tensor types, or storing the expected inputs/outputs as json or npy files in the same directory as the onnx file, along with a couple of macros for easy test generation. A sketch of the trait-based option follows below.
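Purely as an illustration of the trait-plus-macro direction, something like the following could work; every name here (`TestBackend`, `onnx_op_test!`, the file layout) is hypothetical, not an existing API:

```rust
use std::path::Path;

/// Hypothetical trait a downstream library implements so the shared
/// ONNX op tests can run against its own tensor type.
pub trait TestBackend {
    type Tensor;

    /// Load expected inputs/outputs stored next to the .onnx file
    /// (as .npy or .json) into the backend's tensor type.
    fn load_tensor(path: &Path) -> Self::Tensor;

    /// Parse and run the model on the given inputs.
    fn run(model_path: &Path, inputs: &[Self::Tensor]) -> Vec<Self::Tensor>;

    /// Tolerance-aware comparison, since backends differ in precision.
    fn assert_close(actual: &Self::Tensor, expected: &Self::Tensor);
}

/// Hypothetical macro expanding to one #[test] per op, wiring a model
/// file and its stored expected data to the chosen backend.
#[macro_export]
macro_rules! onnx_op_test {
    ($backend:ty, $name:ident) => {
        #[test]
        fn $name() {
            let dir = ::std::path::Path::new(concat!("tests/", stringify!($name)));
            let input =
                <$backend as $crate::TestBackend>::load_tensor(&dir.join("input.npy"));
            let expected =
                <$backend as $crate::TestBackend>::load_tensor(&dir.join("output.npy"));
            let outputs =
                <$backend as $crate::TestBackend>::run(&dir.join("model.onnx"), &[input]);
            <$backend as $crate::TestBackend>::assert_close(&outputs[0], &expected);
        }
    };
}

// Usage in a downstream crate: onnx_op_test!(MyBackend, conv2d);
```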
I'm not sure whether the crate should stay under the burn repo or be separated out completely.

Also, if you are open to suggestions for non-burn names, I suggest SardOnnx (after the red-colored variant of onyx).