Commit

@epochs is deprecated
Saransh-cpp committed Jul 28, 2022
1 parent a8f4788 commit 418a778
Showing 1 changed file with 2 additions and 21 deletions.
docs/src/getting_started/linear_regression.md (23 changes: 2 additions & 21 deletions)
@@ -225,35 +225,16 @@ julia> W, b, custom_loss(W, b, x, y)
It works, and the loss went down again! This was the second epoch of our training procedure. Let's plug this into a `for` loop and train the model for 40 epochs.

```jldoctest linear_regression_simple; filter = r"[+-]?([0-9]*[.])?[0-9]+"
-julia> for i = 1:30
+julia> for i = 1:40
           train_custom_model()
       end

julia> W, b, custom_loss(W, b, x, y)
-(Float32[4.2408285], Float32[2.243728], 7.668049f0)
+(Float32[4.2422233], Float32[2.2460847], 7.6680417f0)
```

There was a significant reduction in loss, and the parameters were updated!

-`Flux` provides yet another convenience functionality, the [`Flux.@epochs`](@ref) macro, which can be used to train a model for a specific number of epochs.
-
-```jldoctest linear_regression_simple; filter = r"[+-]?([0-9]*[.])?[0-9]+"
-julia> Flux.@epochs 10 train_custom_model()
-[ Info: Epoch 1
-[ Info: Epoch 2
-[ Info: Epoch 3
-[ Info: Epoch 4
-[ Info: Epoch 5
-[ Info: Epoch 6
-[ Info: Epoch 7
-[ Info: Epoch 8
-[ Info: Epoch 9
-[ Info: Epoch 10
-julia> W, b, custom_loss(W, b, x, y)
-(Float32[4.2422233], Float32[2.2460847], 7.6680417f0)
-```

We can train the model even more, or tweak the hyperparameters to achieve the desired result faster, but let's stop here. We trained our model for 42 epochs, and the loss went down from `22.74856` to `7.6680417f0`. Time for some visualization!
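
One hyperparameter worth experimenting with is the step size of the gradient-descent update. Below is a minimal sketch of a training step with a configurable learning rate; it assumes `W`, `b`, `x`, `y`, and `custom_loss` are defined as earlier in this tutorial, and the `train_custom_model!` name and `η` argument are hypothetical additions for illustration, not part of the tutorial's code.

```julia
# Sketch: a training step with a tunable learning rate η.
# Assumes W, b, x, y, and custom_loss exist as earlier in the tutorial.
using Flux

function train_custom_model!(η)
    # gradient returns the derivative of the loss with respect to
    # each argument of custom_loss
    dLdW, dLdb, _, _ = gradient(custom_loss, W, b, x, y)
    @. W = W - η * dLdW   # gradient-descent step on the weight
    @. b = b - η * dLdb   # gradient-descent step on the bias
end

for i = 1:10
    train_custom_model!(0.2f0)   # try a larger step; too large can diverge
end
```

A larger step can reduce the loss in fewer epochs, but stepping too far can overshoot the minimum and make the loss diverge, so it is worth re-checking `custom_loss(W, b, x, y)` after each experiment.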

### Results
