A new linear regression tutorial #2016

Merged: 24 commits, Nov 22, 2022
Commits (24 total; the diff below shows changes from 22 commits)
b7c4ae9 Create a getting started section and add a new linear regression example (Saransh-cpp, Jul 5, 2022)
2f74f37 Minor improvements (Saransh-cpp, Jul 6, 2022)
d3526e9 Enable doctests (Saransh-cpp, Jul 6, 2022)
a1e49ad Update code blocks to get rid of `Flux.params` (Saransh-cpp, Jul 14, 2022)
2605f92 Update the text to manually run gradient descent (Saransh-cpp, Jul 14, 2022)
bca37be Fix doctests (Saransh-cpp, Jul 15, 2022)
7670145 Minor language fixes (Saransh-cpp, Jul 16, 2022)
288f4ad Better variable names and cleaner print statements (Saransh-cpp, Jul 16, 2022)
8cab77b `@epcohs` is deprecated (Saransh-cpp, Jul 28, 2022)
0a03ab5 Update docs/src/getting_started/linear_regression.md (Saransh-cpp, Aug 15, 2022)
f55603f Update docs/src/getting_started/linear_regression.md (Saransh-cpp, Aug 15, 2022)
91b1260 Update docs/src/getting_started/linear_regression.md (Saransh-cpp, Aug 15, 2022)
055f6a4 Show data (Saransh-cpp, Aug 15, 2022)
8f89bd7 More general regex (Saransh-cpp, Aug 15, 2022)
51f8a38 Minor bug in the guide (Saransh-cpp, Aug 22, 2022)
36d7578 Better introduction to a ML pipeline (Saransh-cpp, Aug 23, 2022)
df06a6d Move to the new Getting Started section? (Saransh-cpp, Oct 18, 2022)
b67a3a9 Create a new 'tutorials' section (Saransh-cpp, Oct 25, 2022)
768543c Fix doctests (Saransh-cpp, Oct 25, 2022)
13cb623 Try fixing spaces (Saransh-cpp, Oct 25, 2022)
17d167e More doctest fixing (Saransh-cpp, Oct 25, 2022)
0350e03 Move to the existing tutorials section (Saransh-cpp, Oct 27, 2022)
6b64b58 Revert structure + use ids (Saransh-cpp, Oct 27, 2022)
25eea17 Merge branch 'master' into linear-regression (ToucheSir, Nov 22, 2022)
docs/Project.toml: 4 additions, 0 deletions

@@ -1,12 +1,16 @@
 [deps]
 BSON = "fbb218c0-5317-5bc6-957e-2ee96dd4b1f0"
 ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
+DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
 Functors = "d9f16b24-f501-4c13-a1f2-28368ffc5196"
+MLDatasets = "eb30cadb-4394-5ae3-aed4-317e484a6458"
 MLUtils = "f1d291b0-491e-4a28-83b9-f70985020b54"
 NNlib = "872c559c-99b0-510c-b3b7-b6c96a88d5cd"
 OneHotArrays = "0b1bfda6-eb8a-41d2-88d8-f5af5cad476f"
 Optimisers = "3bd65402-5787-11e9-1adc-39752487f4e2"
+Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
+Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
 Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"

 [compat]
docs/make.jl: 9 additions, 8 deletions

@@ -1,19 +1,19 @@
-using Documenter, Flux, NNlib, Functors, MLUtils, BSON, Optimisers, OneHotArrays, Zygote, ChainRulesCore
+using Documenter, Flux, NNlib, Functors, MLUtils, BSON, Optimisers, OneHotArrays, Zygote, ChainRulesCore, Plots, MLDatasets, Statistics, DataFrames

 DocMeta.setdocmeta!(Flux, :DocTestSetup, :(using Flux); recursive = true)

 makedocs(
-    modules = [Flux, NNlib, Functors, MLUtils, BSON, Optimisers, OneHotArrays, Zygote, ChainRulesCore, Base],
+    modules = [Flux, NNlib, Functors, MLUtils, BSON, Optimisers, OneHotArrays, Zygote, ChainRulesCore, Base, Plots, MLDatasets, Statistics, DataFrames],
     doctest = false,
     sitename = "Flux",
     # strict = [:cross_references,],
     pages = [
         "Getting Started" => [
             "Welcome" => "index.md",
-            "Quick Start" => "models/quickstart.md",
-            "Fitting a Line" => "models/overview.md",
-            "Gradients and Layers" => "models/basics.md",
+            "Quick Start" => "getting_started/quickstart.md",
+            "Fitting a Line" => "getting_started/overview.md",
+            "Gradients and Layers" => "getting_started/basics.md",
[Review thread on the "Getting Started" page paths above]

Member: Can I suggest leaving files where they are, until sure? I think moving them may break some links elsewhere. And we may re-organise this into Guide / Reference.

(Adding id & linking by that, not heading name nor file name, seems like the right solution.)

Member Author: All the references associated with these pages now use ids! I have also reverted the structural changes.
         ],
         "Building Models" => [
             "Built-in Layers 📚" => "models/layers.md",

@@ -41,11 +41,12 @@ makedocs(
             "Flat vs. Nested 📚" => "destructure.md",
             "Functors.jl 📚 (`fmap`, ...)" => "models/functors.md",
         ],
+        "Tutorials" => [
+            "Linear Regression" => "tutorials/linear_regression.md",
+            "Custom Layers" => "tutorials/advanced.md", # TODO move freezing to Training
+        ],
         "Performance Tips" => "performance.md",
         "Flux's Ecosystem" => "ecosystem.md",
-        "Tutorials" => [ # TODO, maybe
-            "Custom Layers" => "models/advanced.md", # TODO move freezing to Training
-        ],
     ],
     format = Documenter.HTML(
         sidebar_sitename = false,
Files renamed without changes (the three "Getting Started" pages moved in make.jl above):
docs/src/models/basics.md → docs/src/getting_started/basics.md
docs/src/models/overview.md → docs/src/getting_started/overview.md
docs/src/models/quickstart.md → docs/src/getting_started/quickstart.md
docs/src/gpu.md: 1 addition, 1 deletion

@@ -17,7 +17,7 @@

 Support for array operations on other hardware backends, like GPUs, is provided by external packages like [CUDA](https://github.com/JuliaGPU/CUDA.jl). Flux is agnostic to array types, so we simply need to move model weights and data to the GPU and Flux will handle it.

-For example, we can use `CUDA.CuArray` (with the `cu` converter) to run our [basic example](models/basics.md) on an NVIDIA GPU.
+For example, we can use `CUDA.CuArray` (with the `cu` converter) to run our [basic example](getting_started/basics.md) on an NVIDIA GPU.

 (Note that you need to have CUDA available to use CUDA.CuArray – please see the [CUDA.jl](https://github.com/JuliaGPU/CUDA.jl) instructions for more details.)
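The surrounding code is truncated in this hunk. For context, a minimal sketch of running the basic example on the GPU with the `cu` converter, assuming a working CUDA.jl installation; the variable names follow the basics page, not this diff:

```julia
using Flux, CUDA   # assumes CUDA.jl is installed and a CUDA GPU is available

# Move weights and dummy data to the GPU with the `cu` converter.
W = cu(rand(2, 5))
b = cu(rand(2))

predict(x) = W * x .+ b
loss(x, y) = sum((predict(x) .- y) .^ 2)

x, y = cu(rand(5)), cu(rand(2))   # dummy data, also on the GPU
loss(x, y)                        # computed on the GPU, returns a scalar
```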
docs/src/index.md: 2 additions, 2 deletions

@@ -16,9 +16,9 @@ Other closely associated packages, also installed automatically, include [Zygote

 ## Learning Flux

-The [quick start](models/quickstart.md) page trains a simple neural network.
+The [quick start](getting_started/quickstart.md) page trains a simple neural network.

-This rest of this documentation provides a from-scratch introduction to Flux's take on models and how they work, starting with [fitting a line](models/overview.md). Once you understand these docs, congratulations, you also understand [Flux's source code](https://github.com/FluxML/Flux.jl), which is intended to be concise, legible and a good reference for more advanced concepts.
+This rest of this documentation provides a from-scratch introduction to Flux's take on models and how they work, starting with [fitting a line](getting_started/overview.md). Once you understand these docs, congratulations, you also understand [Flux's source code](https://github.com/FluxML/Flux.jl), which is intended to be concise, legible and a good reference for more advanced concepts.

 Sections with 📚 contain API listings. The same text is avalable at the Julia prompt, by typing for example `?gpu`.
docs/src/models/activation.md: 1 addition, 2 deletions

@@ -1,5 +1,4 @@
-
-# Activation Functions from NNlib.jl
+# [Activation Functions from NNlib.jl](@id man-activations)

 These non-linearities used between layers of your model are exported by the [NNlib](https://github.com/FluxML/NNlib.jl) package.
docs/src/models/functors.md: 1 addition, 1 deletion

@@ -4,7 +4,7 @@ Flux models are deeply nested structures, and [Functors.jl](https://github.com/F

 New layers should be annotated using the `Functors.@functor` macro. This will enable [`params`](@ref Flux.params) to see the parameters inside, and [`gpu`](@ref) to move them to the GPU.

-`Functors.jl` has its own [notes on basic usage](https://fluxml.ai/Functors.jl/stable/#Basic-Usage-and-Implementation) for more details. Additionally, the [Advanced Model Building and Customisation](../models/advanced.md) page covers the use cases of `Functors` in greater details.
+`Functors.jl` has its own [notes on basic usage](https://fluxml.ai/Functors.jl/stable/#Basic-Usage-and-Implementation) for more details. Additionally, the [Advanced Model Building and Customisation](../tutorials/advanced.md) page covers the use cases of `Functors` in greater details.

 ```@docs
 Functors.@functor
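As a reminder of what the annotation this page describes looks like, a hedged sketch of `Functors.@functor` on a custom layer; the `Affine` definition here is illustrative and not part of this diff:

```julia
using Flux, Functors

# An illustrative custom layer, mirroring the docs' `Affine` example.
struct Affine
    W
    b
end
Affine(in::Integer, out::Integer) = Affine(randn(out, in), zeros(out))
(m::Affine)(x) = m.W * x .+ m.b

Functors.@functor Affine    # lets `params` and `gpu` traverse W and b

m = Affine(5, 2)
length(Flux.params(m))      # 2: the weight matrix and the bias vector
```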
docs/src/training/optimisers.md: 1 addition, 1 deletion

@@ -4,7 +4,7 @@ CurrentModule = Flux

 # Optimisers

-Consider a [simple linear regression](../models/basics.md). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.
+Consider a [simple linear regression](../tutorials/linear_regression.md). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.

 ```julia
 using Flux
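The `julia` block is cut off in this hunk. A sketch of the kind of snippet that sentence describes (dummy data, a loss, gradients for `W` and `b`), in the Flux 0.13-era implicit-parameter style:

```julia
using Flux

W = rand(2, 5)
b = rand(2)

predict(x) = W * x .+ b
loss(x, y) = sum((predict(x) .- y) .^ 2)   # squared-error loss

x, y = rand(5), rand(2)                    # dummy data

# Backpropagate to get gradients for the tracked parameters W and b.
gs = gradient(() -> loss(x, y), Flux.params(W, b))
gs[W], gs[b]                               # gradients the optimiser will consume
```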
docs/src/training/training.md: 5 additions, 5 deletions

@@ -36,12 +36,12 @@ Flux.Optimise.train!
 ```

 There are plenty of examples in the [model zoo](https://github.com/FluxML/model-zoo), and
-more information can be found on [Custom Training Loops](../models/advanced.md).
+more information can be found on [Custom Training Loops](../tutorials/advanced.md).

 ## Loss Functions

-The objective function must return a number representing how far the model is from its target – the *loss* of the model. The `loss` function that we defined in [basics](../models/basics.md) will work as an objective.
-In addition to custom losses, a model can be trained in conjunction with
+The objective function must return a number representing how far the model is from its target – the *loss* of the model. The `loss` function that we defined in [basics](../getting_started/basics.md) will work as an objective.
+In addition to custom losses, model can be trained in conjuction with
 the commonly used losses that are grouped under the `Flux.Losses` module.
 We can also define an objective in terms of some model:
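The example block itself is not shown in this hunk. A hedged sketch of an objective defined in terms of a model, using a loss from `Flux.Losses`; the model and data names are placeholders:

```julia
using Flux
using Flux.Losses: mse

model = Dense(10 => 2)           # placeholder model
loss(x, y) = mse(model(x), y)    # objective built from a Flux.Losses function

x = rand(Float32, 10)
y = rand(Float32, 2)
loss(x, y)                       # scalar distance between model(x) and y
```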

@@ -64,11 +64,11 @@ At first glance, it may seem strange that the model that we want to train is not

 ## Model parameters

-The model to be trained must have a set of tracked parameters that are used to calculate the gradients of the objective function. In the [basics](../models/basics.md) section it is explained how to create models with such parameters. The second argument of the function `Flux.train!` must be an object containing those parameters, which can be obtained from a model `m` as `Flux.params(m)`.
+The model to be trained must have a set of tracked parameters that are used to calculate the gradients of the objective function. In the [basics](../getting_started/basics.md) section it is explained how to create models with such parameters. The second argument of the function `Flux.train!` must be an object containing those parameters, which can be obtained from a model `m` as `Flux.params(m)`.

 Such an object contains a reference to the model's parameters, not a copy, such that after their training, the model behaves according to their updated values.

-Handling all the parameters on a layer-by-layer basis is explained in the [Layer Helpers](../models/basics.md) section. For freezing model parameters, see the [Advanced Usage Guide](../models/advanced.md).
+Handling all the parameters on a layer by layer basis is explained in the [Layer Helpers](../getting_started/basics.md) section. Also, for freezing model parameters, see the [Advanced Usage Guide](../tutorials/advanced.md).

 ```@docs
 Flux.params
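A minimal sketch of how these pieces fit the `Flux.train!` signature the text describes; the model, data, and learning rate are illustrative:

```julia
using Flux
using Flux.Optimise: Descent, train!

m = Dense(10 => 2)
loss(x, y) = Flux.Losses.mse(m(x), y)

data = [(rand(Float32, 10), rand(Float32, 2))]   # one (input, target) pair
opt = Descent(0.1)

# The second argument holds references to m's parameters, not copies,
# so train! updates the model's weights in place.
train!(loss, Flux.params(m), data, opt)
```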
docs/src/tutorials/advanced.md (renamed from docs/src/models/advanced.md): 1 addition, 1 deletion shown

@@ -34,7 +34,7 @@ For an intro to Flux and automatic differentiation, see this [tutorial](https://

 ## Customising Parameter Collection for a Model

-Taking reference from our example `Affine` layer from the [basics](basics.md#Building-Layers-1).
+Taking reference from our example `Affine` layer from the [basics](../getting_started/basics.md#Building-Layers-1).

 By default all the fields in the `Affine` type are collected as its parameters, however, in some cases it may be desired to hold other metadata in our "layers" that may not be needed for training, and are hence supposed to be ignored while the parameters are collected. With Flux, it is possible to mark the fields of our layers that are trainable in two ways.