
Cleanup of the book + feature flags #773

Merged
nathanielsimard merged 3 commits into main from book/general-cleanup on Sep 6, 2023
Conversation

nathanielsimard
Member

No description provided.

We call the `train` function defined earlier with a directory for artifacts, the configuration of the model (the number of digit classes is 10 and the hidden dimension is 512), the optimizer configuration which in our case will be the default Adam configuration, and the device which can be obtained from the backend.
In this example, we use the `WgpuBackend` which is compatible with any operating system and will use
the GPU. For other options, see the Burn README. This backend type takes the graphics api, the float
type and the int type as generic argument that will be used during the training. By leaving the
arguments
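To make the generic-argument pattern from the quoted passage concrete, here is a minimal, self-contained sketch of a backend type that takes a graphics API marker, a float type, and an int type as generic parameters with defaults. The `WgpuBackend` and `Vulkan` types below are illustrative stand-ins for this pattern, not Burn's actual definitions.

```rust
use std::marker::PhantomData;

// Placeholder graphics API marker (illustrative, not Burn's real type).
struct Vulkan;

// Default type parameters let callers omit the generics entirely,
// mirroring the "graphics api, float type, int type" pattern described above.
struct WgpuBackend<G = Vulkan, F = f32, I = i32> {
    _marker: PhantomData<(G, F, I)>,
}

impl<G, F, I> WgpuBackend<G, F, I> {
    fn new() -> Self {
        WgpuBackend { _marker: PhantomData }
    }
}

fn main() {
    // Explicit generic arguments...
    let _explicit: WgpuBackend<Vulkan, f32, i32> = WgpuBackend::new();
    // ...or rely on the defaults.
    let _default: WgpuBackend = WgpuBackend::new();
    println!("backend type constructed");
}
```

Because the struct only carries `PhantomData`, the generic choices exist purely at the type level and cost nothing at runtime.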

@@ -63,8 +60,8 @@ There are two major things going on in this code sample.
 underlying low level implementations of tensor operations, allowing your new model to run on any
 backend. Contrary to other frameworks, the backend abstraction isn't determined by a compilation
 flag or a device type. This is important because you can extend the functionalities of a specific
-backend (which will be covered in the more advanced sections of this book), and it allows for an
+backend, such as [backend extension section](../advanced/backend-extension), and it allows for an
 innovative autodiff system. You can also change backend during runtime, for instance to compute
change such as
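The backend abstraction this hunk describes, model code written once against a trait while each backend supplies the low-level tensor operations, can be sketched in a few lines. The `Backend` trait and both backends below are illustrative stand-ins (the "tensor op" is just a dot product), not Burn's real API.

```rust
// Illustrative backend trait: each backend supplies low-level ops.
trait Backend {
    fn name() -> &'static str;
    // Toy stand-in for a tensor operation: a dot product.
    fn dot(a: &[f32], b: &[f32]) -> f32;
}

struct CpuBackend;
impl Backend for CpuBackend {
    fn name() -> &'static str { "cpu" }
    fn dot(a: &[f32], b: &[f32]) -> f32 {
        a.iter().zip(b).map(|(x, y)| x * y).sum()
    }
}

struct GpuBackend;
impl Backend for GpuBackend {
    fn name() -> &'static str { "gpu" }
    fn dot(a: &[f32], b: &[f32]) -> f32 {
        // A real GPU backend would dispatch a kernel here.
        a.iter().zip(b).map(|(x, y)| x * y).sum()
    }
}

// Model code is generic over the backend, so it runs unchanged on any of them.
fn forward<B: Backend>(input: &[f32], weights: &[f32]) -> f32 {
    B::dot(input, weights)
}

fn main() {
    let x = [1.0, 2.0];
    let w = [3.0, 4.0];
    println!("{}: {}", CpuBackend::name(), forward::<CpuBackend>(&x, &w));
    println!("{}: {}", GpuBackend::name(), forward::<GpuBackend>(&x, &w));
}
```

Because the choice of `B` is an ordinary type parameter rather than a compilation flag, the same `forward` can be instantiated with different backends in a single binary.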

- [Building Blocks](./building-blocks): Dive deeper into Burn's core components, understanding how
they fit together. This knowledge forms the basis for more advanced usage and customization.

- [Custom Training Loops](./custom-training-loop): Gain the power to customize your training loops,
Loop

nathanielsimard merged commit 8b3d10c into main on Sep 6, 2023
nathanielsimard deleted the book/general-cleanup branch on Sep 6, 2023 at 13:16
dae added a commit to ankitects/burn that referenced this pull request Sep 8, 2023
nathanielsimard pushed a commit that referenced this pull request Sep 8, 2023
2 participants