
Rlhf 2 #253

Merged
merged 11 commits into mila-iqia:staging from rlhf_2 on Aug 23, 2024

Conversation

Delaunay
Collaborator

No description provided.

@Delaunay Delaunay changed the base branch from master to staging August 15, 2024 15:23
@Delaunay
Collaborator Author

=================
Benchmark results
=================
bench                          | fail |   n |       perf |   sem% |   std% | peak_memory |      score | weight
brax                           |    1 |   1 |    3113.37 |   1.4% |   2.3% |         nan |       0.00 |   1.00
diffusion-gpus                 |    0 |   1 |     118.71 |   1.6% |  12.0% |       57786 |     118.71 |   1.00
diffusion-single               |    0 |   1 |     123.14 |   2.0% |  15.5% |       57778 |     123.14 |   0.00
dinov2-giant-gpus              |    0 |   1 |     449.30 |   0.2% |   1.8% |       69614 |     449.30 |   1.00
dinov2-giant-single            |    0 |   8 |      53.76 |   0.4% |   9.4% |       74646 |     434.53 |   0.00
bf16                           |    0 |   8 |     293.67 |   0.2% |   6.3% |        1788 |    2363.21 |   0.00
fp16                           |    0 |   8 |     289.76 |   0.1% |   3.7% |        1788 |    2326.12 |   0.00
fp32                           |    0 |   8 |      19.14 |   0.0% |   0.7% |        2166 |     153.21 |   0.00
tf32                           |    0 |   8 |     146.99 |   0.1% |   3.5% |        2166 |    1179.83 |   0.00
dimenet                        |    8 |   8 |        nan |   nan% |   nan% |         nan |        nan |   0.00
bert-fp16                      |    0 |   8 |     265.22 |   1.0% |  15.8% |         nan |    2177.91 |   0.00
bert-fp32                      |    0 |   8 |      44.85 |   0.6% |   9.4% |       21170 |     364.50 |   0.00
bert-tf32                      |    0 |   8 |     142.07 |   0.9% |  14.6% |        1764 |    1164.72 |   0.00
bert-tf32-fp16                 |    0 |   8 |     265.81 |   1.0% |  15.6% |         nan |    2182.42 |   3.00
reformer                       |    0 |   8 |      62.29 |   0.3% |   6.1% |       25404 |     501.96 |   1.00
t5                             |    0 |   8 |      51.48 |   0.5% |  10.0% |       34390 |     416.84 |   2.00
whisper                        |    0 |   8 |     497.92 |   1.0% |  22.5% |        1652 |    4026.64 |   1.00
lightning                      |    0 |   8 |     658.86 |   1.2% |  27.6% |       27360 |    5330.59 |   0.00
lightning-gpus                 |    0 |   1 |    4366.16 |   5.6% |  44.8% |       28192 |    4366.16 |   1.00
llama                          |    0 |   8 |     487.35 |   4.4% |  80.1% |       27820 |    3698.64 |   1.00
llm-full-mp-gpus               |    0 |   1 |     172.44 |   2.7% |  14.2% |       48574 |     172.44 |   1.00
llm-lora-ddp-gpus              |    0 |   1 |   16742.27 |   0.4% |   2.1% |       36988 |   16742.27 |   1.00
llm-lora-mp-gpus               |    0 |   1 |    1988.64 |   2.3% |  11.9% |       55972 |    1988.64 |   1.00
llm-lora-single                |    0 |   8 |    2728.03 |   0.2% |   2.9% |       49926 |   21878.79 |   1.00
recursiongfn                   |    8 |   8 |        nan |   nan% |   nan% |         nan |        nan |   0.00
super-slomo                    |    0 |   8 |      44.03 |   1.7% |  38.0% |       65928 |     354.53 |   1.00
focalnet                       |    0 |   8 |     380.30 |   0.6% |  13.9% |       23536 |    3081.79 |   2.00
torchatari                     |    0 |   8 |    5892.20 |   0.5% |  11.3% |        3834 |   46975.89 |   0.00
convnext_large-fp16            |    0 |   8 |     326.23 |   1.6% |  25.0% |       27376 |    2671.70 |   0.00
convnext_large-fp32            |    0 |   8 |      59.45 |   0.6% |   9.6% |       55950 |     483.31 |   0.00
convnext_large-tf32            |    0 |   8 |     155.83 |   0.9% |  13.6% |       49650 |    1275.55 |   0.00
convnext_large-tf32-fp16       |    0 |   8 |     347.93 |   1.1% |  17.2% |       27376 |    2853.39 |   3.00
regnet_y_128gf                 |    0 |   8 |     120.00 |   0.5% |  10.2% |       29762 |     971.58 |   2.00
resnet152-ddp-gpus             |    0 |   1 |    3322.57 |   6.9% |  52.9% |       27980 |    3322.57 |   0.00
resnet50                       |    0 |   8 |     999.06 |   2.1% |  45.8% |       14848 |    8068.29 |   1.00
resnet50-noio                  |    0 |   8 |    1163.86 |   0.3% |   6.9% |       27480 |    9386.94 |   0.00

@Delaunay
Collaborator Author

=================
Benchmark results
=================
bench                          | fail |   n |       perf |   sem% |   std% | peak_memory |      score | weight
brax                           |    1 |   1 |    2823.54 |   4.7% |   9.3% |         nan |       0.00 |   1.00
diffusion-gpus                 |    0 |   1 |     111.47 |   1.6% |  11.9% |       57806 |     111.47 |   1.00
diffusion-single               |    0 |   1 |     124.66 |   2.1% |  15.7% |       57804 |     124.66 |   0.00
dinov2-giant-gpus              |    0 |   1 |     447.31 |   0.3% |   2.6% |       70048 |     447.31 |   1.00
dinov2-giant-single            |    0 |   8 |      53.66 |   0.4% |   9.3% |       74650 |     433.61 |   0.00
bf16                           |    0 |   8 |     293.49 |   0.2% |   6.5% |        1788 |    2361.95 |   0.00
fp16                           |    0 |   8 |     289.81 |   0.1% |   3.7% |        1788 |    2326.44 |   0.00
fp32                           |    0 |   8 |      19.13 |   0.0% |   0.8% |        2166 |     153.16 |   0.00
tf32                           |    0 |   8 |     146.98 |   0.1% |   3.7% |        2166 |    1179.93 |   0.00
dimenet                        |    0 |   8 |     336.45 |   0.8% |  17.5% |       12018 |    2722.97 |   0.00
bert-fp16                      |    0 |   8 |     265.43 |   1.0% |  15.7% |         nan |    2179.73 |   0.00
bert-fp32                      |    0 |   8 |      44.81 |   0.6% |   9.8% |       21170 |     364.47 |   0.00
bert-tf32                      |    0 |   8 |     142.24 |   0.9% |  13.8% |         nan |    1164.55 |   0.00
bert-tf32-fp16                 |    0 |   8 |     265.06 |   1.0% |  16.2% |         nan |    2177.70 |   3.00
reformer                       |    0 |   8 |      62.33 |   0.3% |   6.0% |       25404 |     502.24 |   1.00
t5                             |    0 |   8 |      51.44 |   0.4% |   9.8% |       34390 |     416.34 |   2.00
whisper                        |    0 |   8 |     441.95 |   1.5% |  33.0% |        8974 |    3567.18 |   1.00
lightning                      |    0 |   8 |     673.72 |   1.1% |  24.2% |       27360 |    5452.50 |   0.00
lightning-gpus                 |    0 |   1 |    3571.97 |   7.8% |  61.6% |       28190 |    3571.97 |   1.00
llama                          |    0 |   8 |     491.46 |   4.5% |  80.7% |       27820 |    3733.92 |   1.00
llm-full-mp-gpus               |    0 |   1 |     184.83 |   2.5% |  13.5% |       48780 |     184.83 |   1.00
llm-lora-ddp-gpus              |    0 |   1 |   16745.48 |   0.4% |   2.1% |       36988 |   16745.48 |   1.00
llm-lora-mp-gpus               |    0 |   1 |    1988.74 |   2.2% |  11.9% |       55972 |    1988.74 |   1.00
llm-lora-single                |    0 |   8 |    2727.40 |   0.2% |   2.9% |       49926 |   21873.98 |   1.00
recursiongfn                   |    0 |   8 |    7236.17 |   1.1% |  25.2% |       11404 |   58291.16 |   0.00
super-slomo                    |    0 |   8 |      46.06 |   1.5% |  33.3% |       65928 |     371.11 |   1.00
focalnet                       |    0 |   8 |     377.69 |   0.6% |  14.1% |       23536 |    3060.39 |   2.00
torchatari                     |    0 |   8 |    5945.31 |   0.5% |  11.7% |        3834 |   47394.26 |   0.00
convnext_large-fp16            |    0 |   8 |     329.48 |   1.5% |  23.7% |       27376 |    2698.60 |   0.00
convnext_large-fp32            |    0 |   8 |      59.51 |   0.6% |   9.4% |       55950 |     483.59 |   0.00
convnext_large-tf32            |    0 |   8 |     155.85 |   0.9% |  13.6% |       49650 |    1275.79 |   0.00
convnext_large-tf32-fp16       |    0 |   8 |     314.61 |   1.8% |  28.3% |       27376 |    2574.31 |   3.00
regnet_y_128gf                 |    0 |   8 |     119.74 |   0.4% |   9.8% |       29762 |     969.04 |   2.00
resnet152-ddp-gpus             |    0 |   1 |    2716.59 |   8.6% |  65.8% |       27980 |    2716.59 |   0.00
resnet50                       |    0 |   8 |    1027.77 |   1.9% |  41.6% |       14848 |    8305.04 |   1.00
resnet50-noio                  |    0 |   8 |    1164.79 |   0.3% |   6.4% |       27480 |    9389.36 |   0.00

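The two benchmark reports above can be compared mechanically. Below is a minimal sketch (not part of milabench or this PR) that parses the pipe-delimited table format and reports the relative change in `perf` between runs; the column layout is assumed from the table header (`bench | fail | n | perf | sem% | std% | peak_memory | score | weight`), and the sample rows are taken verbatim from the two reports:

```python
def parse_perf(table_text):
    """Map bench name -> perf, skipping the header row and nan entries."""
    perf = {}
    for line in table_text.strip().splitlines():
        parts = [p.strip() for p in line.split("|")]
        # Expect exactly 9 columns; skip the header line itself.
        if len(parts) != 9 or parts[0] == "bench":
            continue
        if parts[3] != "nan":
            perf[parts[0]] = float(parts[3])
    return perf

# Two rows copied from each report above, for illustration.
run_a = """\
brax                           |    1 |   1 |    3113.37 |   1.4% |   2.3% |         nan |       0.00 |   1.00
whisper                        |    0 |   8 |     497.92 |   1.0% |  22.5% |        1652 |    4026.64 |   1.00
"""
run_b = """\
brax                           |    1 |   1 |    2823.54 |   4.7% |   9.3% |         nan |       0.00 |   1.00
whisper                        |    0 |   8 |     441.95 |   1.5% |  33.0% |        8974 |    3567.18 |   1.00
"""

a, b = parse_perf(run_a), parse_perf(run_b)
# Percent change of the second run relative to the first.
deltas = {k: 100.0 * (b[k] - a[k]) / a[k] for k in a if k in b}
for bench, d in sorted(deltas.items()):
    print(f"{bench:30s} {d:+6.1f}%")
```

Run-to-run variation is expected here (note the large `std%` on several benches), so a delta like this is a rough signal rather than a verdict.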
@Delaunay Delaunay marked this pull request as ready for review August 23, 2024 11:24
@Delaunay Delaunay merged commit b9ca3db into mila-iqia:staging Aug 23, 2024
@Delaunay Delaunay deleted the rlhf_2 branch August 23, 2024 11:25