✨ update to use interlibrary links instead of Markdown (#18500)
stevhliu authored Aug 8, 2022
1 parent ec8d262 commit 36b3799
Showing 1 changed file with 4 additions and 4 deletions.
8 changes: 4 additions & 4 deletions docs/source/en/accelerate.mdx
@@ -22,7 +22,7 @@ Get started by installing 🤗 Accelerate:
pip install accelerate
```

-Then import and create an [`Accelerator`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator) object. `Accelerator` will automatically detect your type of distributed setup and initialize all the necessary components for training. You don't need to explicitly place your model on a device.
+Then import and create an [`~accelerate.Accelerator`] object. The [`~accelerate.Accelerator`] will automatically detect your type of distributed setup and initialize all the necessary components for training. You don't need to explicitly place your model on a device.

```py
>>> from accelerate import Accelerator
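>>> # a minimal sketch of the collapsed line that follows in the guide,
>>> # assuming the default constructor with no extra arguments
>>> accelerator = Accelerator()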
@@ -32,7 +32,7 @@ Then import and create an [`Accelerator`](https://huggingface.co/docs/accelerate
## Prepare to accelerate
-The next step is to pass all the relevant training objects to the [`prepare`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.prepare) method. This includes your training and evaluation DataLoaders, a model and an optimizer:
+The next step is to pass all the relevant training objects to the [`~accelerate.Accelerator.prepare`] method. This includes your training and evaluation DataLoaders, a model and an optimizer:
```py
>>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
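...     # a hedged sketch of the collapsed continuation: the same objects are
...     # passed in and returned wrapped for the detected distributed setup
...     train_dataloader, eval_dataloader, model, optimizer
... )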
@@ -42,7 +42,7 @@ The next step is to pass all the relevant training objects to the [`prepare`](ht

## Backward

-The last addition is to replace the typical `loss.backward()` in your training loop with 🤗 Accelerate's [`backward`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.backward) method:
+The last addition is to replace the typical `loss.backward()` in your training loop with 🤗 Accelerate's [`~accelerate.Accelerator.backward`] method:

```py
>>> for epoch in range(num_epochs):
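...     # hedged sketch of the collapsed loop body; the dataloader, model, and
...     # optimizer come from `prepare` above, and the names are assumptions
...     for batch in train_dataloader:
...         outputs = model(**batch)
...         loss = outputs.loss
...         accelerator.backward(loss)  # replaces the usual loss.backward()
...         optimizer.step()
...         optimizer.zero_grad()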
@@ -121,7 +121,7 @@ accelerate launch train.py

### Train with a notebook

-🤗 Accelerate can also run in a notebook if you're planning on using Colaboratory's TPUs. Wrap all the code responsible for training in a function, and pass it to `notebook_launcher`:
+🤗 Accelerate can also run in a notebook if you're planning on using Colaboratory's TPUs. Wrap all the code responsible for training in a function, and pass it to [`~accelerate.notebook_launcher`]:

```py
>>> from accelerate import notebook_launcher
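>>> # hedged sketch of the collapsed launch call; `training_function` stands
>>> # for the user-defined training function described above and is an
>>> # assumption here, not the literal name used in the collapsed lines
>>> notebook_launcher(training_function)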
