Commit ec0b07b

Updated README.ipynb (and generated README.md)
sradc committed Oct 3, 2023
1 parent 45dd35d commit ec0b07b
Showing 2 changed files with 7 additions and 119 deletions.
1 change: 1 addition & 0 deletions README.ipynb
```diff
@@ -8,6 +8,7 @@
     "\n",
     "# MakeAgent\n",
     "\n",
+    "WIP\n",
     "\n",
     "### Setup\n",
     "\n",
```
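Per the warning comment at the top of README.md (visible in the diff below), the markdown file is autogenerated from README.ipynb, which matches the commit message. The generation step itself isn't shown in this commit; a common approach (an assumption here, not confirmed by the repo) is `jupyter nbconvert --to markdown README.ipynb`.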
125 changes: 6 additions & 119 deletions README.md
```diff
@@ -1,125 +1,12 @@
 <!-- Warning, README.md is autogenerated from README.ipynb, do not edit it directly -->
 
-`pip install lr_schedules`
+# MakeAgent
 
-[![](https://github.com/sradc/lr_schedules/workflows/Python%20package/badge.svg)](https://github.com/sradc/lr_schedules/commits/)
+WIP
 
-# lr_schedules
+### Setup
 
-This project currently just contains `LinearScheduler`, for custom linear learning rate schedules.
-
-```python
-from lr_schedules import LinearScheduler
-import matplotlib.pyplot as plt
-import torch
-```
-
-## PyTorch example, triangle
-
-```python
-times = [0, 0.5, 1]
-values = [0, 1, 0]
-
-W = torch.tensor([1.0], requires_grad=True)
-optimizer = torch.optim.SGD([W], lr=0.1)
-linear_scheduler = LinearScheduler(times, values, total_training_steps=100)
-scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, linear_scheduler)
-
-lr_vals = []
-for step in range(100):
-    optimizer.zero_grad()
-    loss = torch.sum(W**2)
-    loss.backward()
-    optimizer.step()
-    scheduler.step()
-    lr_vals.append(optimizer.param_groups[0]["lr"])
-
-plt.figure(figsize=(5, 2))
-plt.plot(lr_vals)
-plt.xlabel("Training step")
-plt.ylabel("Learning rate")
-plt.show()
-```
-
-![README_files/README_3_0.png](https://raw.githubusercontent.com/sradc/lr_schedules/master/README_files/README_3_0.png)
-
-## PyTorch example, ramp up and down
-
-```python
-times = [0, 0.1, 0.9, 1]
-values = [0, 1, 0.9, 0]
-
-W = torch.tensor([1.0], requires_grad=True)
-optimizer = torch.optim.SGD([W], lr=0.1)
-linear_scheduler = LinearScheduler(times, values, total_training_steps=100)
-scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, linear_scheduler)
-
-lr_vals = []
-for step in range(100):
-    optimizer.zero_grad()
-    loss = torch.sum(W**2)
-    loss.backward()
-    optimizer.step()
-    scheduler.step()
-    lr_vals.append(optimizer.param_groups[0]["lr"])
-
-plt.figure(figsize=(5, 2))
-plt.plot(lr_vals)
-plt.xlabel("Training step")
-plt.ylabel("Learning rate")
-plt.show()
-```
-
-![README_files/README_5_0.png](https://raw.githubusercontent.com/sradc/lr_schedules/master/README_files/README_5_0.png)
-
-## PyTorch example, specifying absolute number of steps
-
-```python
-times = [0, 12, 90, 100]
-values = [0, 1, 0.8, 0]
-
-W = torch.tensor([1.0], requires_grad=True)
-optimizer = torch.optim.SGD([W], lr=0.1)
-linear_scheduler = LinearScheduler(times, values)
-scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, linear_scheduler)
-
-lr_vals = []
-for step in range(100):
-    optimizer.zero_grad()
-    loss = torch.sum(W**2)
-    loss.backward()
-    optimizer.step()
-    scheduler.step()
-    lr_vals.append(optimizer.param_groups[0]["lr"])
-
-plt.figure(figsize=(5, 2))
-plt.plot(lr_vals)
-plt.xlabel("Training step")
-plt.ylabel("Learning rate")
-plt.show()
-```
-
-![README_files/README_7_0.png](https://raw.githubusercontent.com/sradc/lr_schedules/master/README_files/README_7_0.png)
-
-## Dev set up of repo
-
-- Clone the repo
-- Install `poetry` (repo was run with python3.9)
-- Run `poetry install --with docs`
+- Clone the repo and `cd` into it
+- Run `poetry install`
+- Run `poetry run pre-commit install`
```
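For context on the removed content: the old README described `LinearScheduler` as building custom piecewise-linear learning-rate schedules from `times`/`values` breakpoints, passed as the `lr_lambda` of `torch.optim.lr_scheduler.LambdaLR` (which multiplies the optimizer's base learning rate by the returned factor each step). The implementation itself isn't part of this diff; the following is a minimal sketch of that idea under those assumptions, not the actual `lr_schedules` code.

```python
# Minimal sketch of a piecewise-linear schedule multiplier, assuming the
# semantics implied by the removed examples; not the lr_schedules implementation.
import numpy as np

class PiecewiseLinearSchedule:
    def __init__(self, times, values, total_training_steps=None):
        # If total_training_steps is given, `times` are fractions of training;
        # otherwise they are treated as absolute step counts.
        self.times = np.asarray(times, dtype=float)
        if total_training_steps is not None:
            self.times = self.times * total_training_steps
        self.values = np.asarray(values, dtype=float)

    def __call__(self, step: int) -> float:
        # Linearly interpolate between the (time, value) breakpoints.
        # LambdaLR multiplies the optimizer's base lr by this factor.
        return float(np.interp(step, self.times, self.values))
```

With `times=[0, 0.5, 1]`, `values=[0, 1, 0]`, and `total_training_steps=100`, evaluating this at steps 0 through 100 traces the triangle shape plotted in the first removed example.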
