{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":790220063,"defaultBranch":"main","name":"instructlab","ownerLogin":"cdoern","currentUserCanPush":false,"isFork":true,"isEmpty":false,"createdAt":"2024-04-22T13:36:52.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/22475215?v=4","public":true,"private":false,"isOrgOwned":false},"refInfo":{"name":"","listCacheKey":"v0:1725544076.0","currentOid":""},"activityList":{"items":[{"before":"e36cd78f4ac1a7cb04ccf2b38c8fb0aaeea648f5","after":"925ccdad7a47f7fc99daed29567d502587bf00d9","ref":"refs/heads/train-mps-cpu","pushedAt":"2024-09-17T02:04:38.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"add testing for new training functionality\n\nthe existing e2e tests needed to be adapted to now have the following options\n\ns: run pipeline simple training\nf: run pipeline full training\na: run accelerated library training\n\nunit tests switched from referencing --legacy to using --pipeline\n\nthe different training techiques changed ordering in this rewrite meaning some safeguards (that probably always should have been there) need to be put in place to ensure\ncertain checks only happen if we are executing full, accelerated, or multiphase train\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"add testing for new training functionality"}},{"before":"192f7e5fd5766df41526961cc46fbb5f2325aa43","after":"e36cd78f4ac1a7cb04ccf2b38c8fb0aaeea648f5","ref":"refs/heads/train-mps-cpu","pushedAt":"2024-09-17T01:32:00.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"add testing for new training functionality\n\nthe existing e2e tests needed to be adapted 
to now have the following options\n\ns: run pipeline simple training\nf: run pipeline full training\na: run accelerated library training\n\nunit tests switched from referencing --legacy to using --pipeline\n\nthe different training techiques changed ordering in this rewrite meaning some safeguards (that probably always should have been there) need to be put in place to ensure\ncertain checks only happen if we are executing full, accelerated, or multiphase train\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"add testing for new training functionality"}},{"before":"8df4531bccb7feba9bcca84476649f1a0efc138e","after":"192f7e5fd5766df41526961cc46fbb5f2325aa43","ref":"refs/heads/train-mps-cpu","pushedAt":"2024-09-17T00:46:20.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"linting and test fixes\n\nthe different training techiques changed ordering in this rewrite meaning some safeguards (that probably always should have been there) need to be put in place to ensure\ncertain checks only happen if we are executing full, accelerated, or multiphase train\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"linting and test fixes"}},{"before":"cff596a300f030edc5b9cb899f379705515a89f1","after":"8df4531bccb7feba9bcca84476649f1a0efc138e","ref":"refs/heads/train-mps-cpu","pushedAt":"2024-09-17T00:39:25.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"linting and test fixes\n\nthe different training techiques changed ordering in this rewrite meaning some safeguards (that probably always should have been there) need to be put in place to ensure\ncertain checks only happen if we are executing full, accelerated, or multiphase 
train\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"linting and test fixes"}},{"before":"c4c9a396d090e30bbfcfb4692ff80cff1d866544","after":"1ac32eab22a2b150fd9bd40133b3f2780ad1c606","ref":"refs/heads/trainprof-auto","pushedAt":"2024-09-16T21:17:47.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"feat: train profile auto selection\n\nadd auto selection for NVIDIA GPU train profiles. This uses torch under the hood\nto read the amount of GPUs, tally the total vram, the GPU type, and the GPU amount. Use this information to choose the train profile which best matches the amount of GPUs/the GPU type. If none of these match, then match based off of vRAM. for good measure, set nproc_per_node at the end if basing of off vRAM. in case we found a profile which matches our vRAM amount but not our GPU amount\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"feat: train profile auto selection"}},{"before":"c2f57f8242bbc9a132ff67dfccb4a9dce6bfd80e","after":"c4c9a396d090e30bbfcfb4692ff80cff1d866544","ref":"refs/heads/trainprof-auto","pushedAt":"2024-09-16T21:14:50.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"feat: train profile auto selection\n\nadd auto selection for NVIDIA GPU train profiles. This uses torch under the hood\nto read the amount of GPUs, tally the total vram, the GPU type, and the GPU amount. Use this information to choose the train profile which best matches the amount of GPUs/the GPU type. If none of these match, then match based off of vRAM. for good measure, set nproc_per_node at the end if basing of off vRAM. 
in case we found a profile which matches our vRAM amount but not our GPU amount\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"feat: train profile auto selection"}},{"before":"4b7fbaf17464953b52830107e0371b067d939fc2","after":"c2f57f8242bbc9a132ff67dfccb4a9dce6bfd80e","ref":"refs/heads/trainprof-auto","pushedAt":"2024-09-16T21:03:20.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"feat: train profile auto selection\n\nadd auto selection for NVIDIA GPU train profiles. This uses torch under the hood\nto read the amount of GPUs, tally the total vram, the GPU type, and the GPU amount. Use this information to choose the train profile which best matches the amount of GPUs/the GPU type. If none of these match, then match based off of vRAM. for good measure, set nproc_per_node at the end if basing of off vRAM. in case we found a profile which matches our vRAM amount but not our GPU amount\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"feat: train profile auto selection"}},{"before":"204bbd727b6a1f84efbae5e4c0a5d3d249fd06b9","after":"cff596a300f030edc5b9cb899f379705515a89f1","ref":"refs/heads/train-mps-cpu","pushedAt":"2024-09-16T00:46:26.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"linting and test fixes\n\nthe different training techiques changed ordering in this rewrite meaning some safeguards (that probably always should have been there) need to be put in place to ensure\ncertain checks only happen if we are executing full, accelerated, or multiphase train\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"linting and test 
fixes"}},{"before":"2fbfc0120b7f33b1e73f055773782d27cc31c742","after":"204bbd727b6a1f84efbae5e4c0a5d3d249fd06b9","ref":"refs/heads/train-mps-cpu","pushedAt":"2024-09-16T00:30:34.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"linting and test fixes\n\nthe different training techiques changed ordering in this rewrite meaning some safeguards (that probably always should have been there) need to be put in place to ensure\ncertain checks only happen if we are executing full, accelerated, or multiphase train\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"linting and test fixes"}},{"before":"7195405c62a5984c3ba8da5fc9b789097b22fbe5","after":"2fbfc0120b7f33b1e73f055773782d27cc31c742","ref":"refs/heads/train-mps-cpu","pushedAt":"2024-09-16T00:18:29.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"linting and test fixes\n\nthe different training techiques changed ordering in this rewrite meaning some safeguards (that probably always should have been there) need to be put in place to ensure\ncertain checks only happen if we are executing full, accelerated, or multiphase train\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"linting and test fixes"}},{"before":"66f3d87f2992ab4abdb88deec015dfc1e05af6c2","after":"7195405c62a5984c3ba8da5fc9b789097b22fbe5","ref":"refs/heads/train-mps-cpu","pushedAt":"2024-09-16T00:11:39.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"linting and test fixes\n\nthe different training techiques changed ordering in this rewrite meaning some 
safeguards (that probably always should have been there) need to be put in place to ensure\ncertain checks only happen if we are executing full, accelerated, or multiphase train\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"linting and test fixes"}},{"before":"a0e9c53da9a2b7be1b58a9ab83bf24d7bce79039","after":"66f3d87f2992ab4abdb88deec015dfc1e05af6c2","ref":"refs/heads/train-mps-cpu","pushedAt":"2024-09-16T00:09:15.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"linting and test fixes\n\nthe different training techiques changed ordering in this rewrite meaning some safeguards (that probably always should have been there) need to be put in place to ensure\ncertain checks only happen if we are executing full, accelerated, or multiphase train\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"linting and test fixes"}},{"before":"d597f11ef9314a463e29dbdf1487ce3b84e9abea","after":"a0e9c53da9a2b7be1b58a9ab83bf24d7bce79039","ref":"refs/heads/train-mps-cpu","pushedAt":"2024-09-16T00:05:20.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"linting and test fixes\n\nthe different training techiques changed ordering in this rewrite meaning some safeguards (that probably always should have been there) need to be put in place to ensure\ncertain checks only happen if we are executing full, accelerated, or multiphase train\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"linting and test 
fixes"}},{"before":"0cf0805c4a65ad37fce791a3244843bda1f3dc9d","after":"d597f11ef9314a463e29dbdf1487ce3b84e9abea","ref":"refs/heads/train-mps-cpu","pushedAt":"2024-09-15T21:03:47.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"linting and test fixes\n\nthe different training techiques changed ordering in this rewrite meaning some safeguards (that probably always should have been there) need to be put in place to ensure\ncertain checks only happen if we are executing full, accelerated, or multiphase train\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"linting and test fixes"}},{"before":"dfa3106f3f84ca8ece148e686633ecacb68bc7a3","after":"0cf0805c4a65ad37fce791a3244843bda1f3dc9d","ref":"refs/heads/train-mps-cpu","pushedAt":"2024-09-15T20:18:30.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"linting and test fixes\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"linting and test fixes"}},{"before":"020fd82587b3f8c51f45b1b6037e0d639b95546e","after":"dfa3106f3f84ca8ece148e686633ecacb68bc7a3","ref":"refs/heads/train-mps-cpu","pushedAt":"2024-09-15T18:48:40.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"test removing torchrun\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"test removing torchrun"}},{"before":"325adacf9b8e4a641d201224bb5ebac9f8f40324","after":"4b7fbaf17464953b52830107e0371b067d939fc2","ref":"refs/heads/trainprof-auto","pushedAt":"2024-09-14T15:47:01.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie 
Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"feat: train profile auto selection\n\nadd auto selection for NVIDIA GPU train profiles. This uses torch under the hood\nto read the amount of GPUs, tally the total vram, and use this information to choose the train profile which best matches the amount of vram.\nfor good measure, set nproc_per_node at the end as well in case we found a profile which matches our vRAM amount but not our GPU amount\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"feat: train profile auto selection"}},{"before":"0d2ed87bb8a85e287ee295186a0f835eeab8688f","after":"325adacf9b8e4a641d201224bb5ebac9f8f40324","ref":"refs/heads/trainprof-auto","pushedAt":"2024-09-14T15:39:34.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"feat: train profile auto selection\n\nadd auto selection for NVIDIA GPU train profiles. This uses torch under the hood\nto read the amount of GPUs, tally the total vram, and use this information to choose the train profile which best matches the amount of vram.\nfor good measure, set nproc_per_node at the end as well in case we found a profile which matches our vRAM amount but not our GPU amount\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"feat: train profile auto selection"}},{"before":"c2b46840e2f532ed8b1d6583af4e3bc50d85c68b","after":"0d2ed87bb8a85e287ee295186a0f835eeab8688f","ref":"refs/heads/trainprof-auto","pushedAt":"2024-09-13T20:26:49.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"feat: train profile auto selection\n\nadd auto selection for NVIDIA GPU train profiles. 
This uses torch under the hood\nto read the amount of GPUs, tally the total vram, and use this information to choose the train profile which best matches the amount of vram.\nfor good measure, set nproc_per_node at the end as well in case we found a profile which matches our vRAM amount but not our GPU amount\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"feat: train profile auto selection"}},{"before":"4b00c5c0ed598d8e95a19e0a16f84c1828de3150","after":"c2b46840e2f532ed8b1d6583af4e3bc50d85c68b","ref":"refs/heads/trainprof-auto","pushedAt":"2024-09-13T20:07:42.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"feat: train profile auto selection\n\nadd auto selection for NVIDIA GPU train profiles. This uses torch under the hood\nto read the amount of GPUs, tally the total vram, and use this information to choose the train profile which best matches the amount of vram.\nfor good measure, set nproc_per_node at the end as well in case we found a profile which matches our vRAM amount but not our GPU amount\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"feat: train profile auto selection"}},{"before":"1839fa79d40dea0b20f4dbd61111c8640c0766f7","after":"4b00c5c0ed598d8e95a19e0a16f84c1828de3150","ref":"refs/heads/trainprof-auto","pushedAt":"2024-09-13T20:05:35.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"feat: train profile auto selection\n\nadd auto selection for NVIDIA GPU train profiles. 
This uses torch under the hood\nto read the amount of GPUs, tally the total vram, and use this information to choose the train profile which best matches the amount of vram.\nfor good measure, set nproc_per_node at the end as well in case we found a profile which matches our vRAM amount but not our GPU amount\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"feat: train profile auto selection"}},{"before":"bf7d96f45ab51b58cd4e22bef9cfddc6ad9e983c","after":"1839fa79d40dea0b20f4dbd61111c8640c0766f7","ref":"refs/heads/trainprof-auto","pushedAt":"2024-09-13T20:02:06.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"feat: train profile auto selection\n\nadd auto selection for NVIDIA GPU train profiles. This uses torch under the hood\nto read the amount of GPUs, tally the total vram, and use this information to choose the train profile which best matches the amount of vram.\nfor good measure, set nproc_per_node at the end as well in case we found a profile which matches our vRAM amount but not our GPU amount\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"feat: train profile auto selection"}},{"before":"b5fdac033cbe382be8a1be26001f82b6d0192f16","after":"bf7d96f45ab51b58cd4e22bef9cfddc6ad9e983c","ref":"refs/heads/trainprof-auto","pushedAt":"2024-09-13T19:49:54.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"feat: train profile auto selection\n\nadd auto selection for NVIDIA GPU train profiles. 
This uses torch under the hood\nto read the amount of GPUs, tally the total vram, and use this information to choose the train profile which best matches the amount of vram.\nfor good measure, set nproc_per_node at the end as well in case we found a profile which matches our vRAM amount but not our GPU amount\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"feat: train profile auto selection"}},{"before":"0050edbbe4295d48bfd820daf06c5f2b18f4de2b","after":"b5fdac033cbe382be8a1be26001f82b6d0192f16","ref":"refs/heads/trainprof-auto","pushedAt":"2024-09-13T19:42:16.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"feat: train profile auto selection\n\nadd auto selection for NVIDIA GPU train profiles. This uses torch under the hood\nto read the amount of GPUs, tally the total vram, and use this information to choose the train profile which best matches the amount of vram.\nfor good measure, set nproc_per_node at the end as well in case we found a profile which matches our vRAM amount but not our GPU amount\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"feat: train profile auto selection"}},{"before":"c872b2970a21f19a44c2d75833fe4e9bfa82a5af","after":"0050edbbe4295d48bfd820daf06c5f2b18f4de2b","ref":"refs/heads/trainprof-auto","pushedAt":"2024-09-13T19:22:12.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"feat: train profile auto selection\n\nadd auto selection for NVIDIA GPU train profiles. 
This uses torch under the hood\nto read the amount of GPUs, tally the total vram, and use this information to choose the train profile which best matches the amount of vram.\nfor good measure, set nproc_per_node at the end as well in case we found a profile which matches our vRAM amount but not our GPU amount\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"feat: train profile auto selection"}},{"before":"e4bd529b05480f1bc1da94382d67c5649daa92b7","after":"c872b2970a21f19a44c2d75833fe4e9bfa82a5af","ref":"refs/heads/trainprof-auto","pushedAt":"2024-09-13T19:21:08.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"feat: train profile auto selection\n\nadd auto selection for NVIDIA GPU train profiles. This uses torch under the hood\nto read the amount of GPUs, tally the total vram, and use this information to choose the train profile which best matches the amount of vram.\nfor good measure, set nproc_per_node at the end as well in case we found a profile which matches our vRAM amount but not our GPU amount\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"feat: train profile auto selection"}},{"before":"858811391c4f16ab0c65dea716190c0a69a26db1","after":"e4bd529b05480f1bc1da94382d67c5649daa92b7","ref":"refs/heads/trainprof-auto","pushedAt":"2024-09-13T19:09:16.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"feat: train profile auto selection\n\nadd auto selection for NVIDIA GPU train profiles. 
This uses torch under the hood\nto read the amount of GPUs, tally the total vram, and use this information to choose the train profile which best matches the amount of vram.\nfor good measure, set nproc_per_node at the end as well in case we found a profile which matches our vRAM amount but not our GPU amount\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"feat: train profile auto selection"}},{"before":"318ab2daf76b7e41b137df0e0c89f43fdcb7a3ee","after":"858811391c4f16ab0c65dea716190c0a69a26db1","ref":"refs/heads/trainprof-auto","pushedAt":"2024-09-13T18:43:02.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"feat: train profile auto selection\n\nadd auto selection for NVIDIA GPU train profiles. This uses torch under the hood\nto read the amount of GPUs, tally the total vram, and use this information to choose the train profile which best matches the amount of vram.\nfor good measure, set nproc_per_node at the end as well in case we found a profile which matches our vRAM amount but not our GPU amount\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"feat: train profile auto selection"}},{"before":"9e2ff9daab2199ba167a6a05a7be9528134ab827","after":"318ab2daf76b7e41b137df0e0c89f43fdcb7a3ee","ref":"refs/heads/trainprof-auto","pushedAt":"2024-09-13T17:19:40.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"feat: train profile auto selection\n\nadd auto selection for NVIDIA GPU train profiles. 
This uses torch under the hood\nto read the amount of GPUs, tally the total vram, and use this information to choose the train profile which best matches the amount of vram.\nfor good measure, set nproc_per_node at the end as well in case we found a profile which matches our vRAM amount but not our GPU amount\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"feat: train profile auto selection"}},{"before":"48b5c93afc8d48915a18f09b7a72f201d2f7dbd9","after":"9e2ff9daab2199ba167a6a05a7be9528134ab827","ref":"refs/heads/trainprof-auto","pushedAt":"2024-09-13T17:13:59.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cdoern","name":"Charlie Doern","path":"/cdoern","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22475215?s=80&v=4"},"commit":{"message":"feat: train profile auto selection\n\nadd auto selection for NVIDIA GPU train profiles. This uses torch under the hood\nto read the amount of GPUs, tally the total vram, and use this information to choose the train profile which best matches the amount of vram.\nfor good measure, set nproc_per_node at the end as well in case we found a profile which matches our vRAM amount but not our GPU amount\n\nSigned-off-by: Charlie Doern ","shortMessageHtmlLink":"feat: train profile auto selection"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"Y3Vyc29yOnYyOpK7MjAyNC0wOS0xN1QwMjowNDozOC4wMDAwMDBazwAAAAS4Ccpy","startCursor":"Y3Vyc29yOnYyOpK7MjAyNC0wOS0xN1QwMjowNDozOC4wMDAwMDBazwAAAAS4Ccpy","endCursor":"Y3Vyc29yOnYyOpK7MjAyNC0wOS0xM1QxNzoxMzo1OS4wMDAwMDBazwAAAAS1uUnh"}},"title":"Activity ยท cdoern/instructlab"}
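The "add testing for new training functionality" message describes e2e tests that accept single-letter options (s, f, a), which is the shape of flags a bash e2e script would parse with getopts. A minimal sketch under that assumption — the function and variable names here are hypothetical, not taken from the actual instructlab e2e script:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of parsing the e2e training-mode flags described in
# the commit message; the real instructlab e2e script may differ.
parse_modes() {
    local OPTIND=1 opt
    run_simple=0 run_full=0 run_accelerated=0
    while getopts "sfa" opt "$@"; do
        case "$opt" in
            s) run_simple=1 ;;      # run pipeline simple training
            f) run_full=1 ;;        # run pipeline full training
            a) run_accelerated=1 ;; # run accelerated library training
            *) echo "usage: e2e [-s] [-f] [-a]" >&2; return 1 ;;
        esac
    done
}

parse_modes -s -a
echo "simple=$run_simple full=$run_full accelerated=$run_accelerated"
# prints: simple=1 full=0 accelerated=1
```

Resetting OPTIND inside the function lets parse_modes be called more than once in the same shell, which matters when an e2e driver re-parses modes per phase.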
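The "feat: train profile auto selection" messages describe a two-stage match: prefer a profile whose GPU type and count equal what torch detects, otherwise fall back to the best vRAM fit and pin nproc_per_node to the detected GPU count. A minimal sketch of that selection logic — the TrainProfile fields and select_profile function are illustrative inventions, not the actual instructlab implementation. In real code the detected values would come from torch: torch.cuda.device_count() for the GPU count, and torch.cuda.get_device_properties(i).name / .total_memory for the type and per-GPU vRAM.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class TrainProfile:
    """Hypothetical stand-in for an InstructLab train-profile entry."""
    name: str
    gpu_type: str       # e.g. "NVIDIA A100-SXM4-80GB"
    gpu_count: int
    total_vram_gb: int  # vRAM the profile was tuned for, summed over GPUs
    nproc_per_node: int


def select_profile(gpu_type: str, gpu_count: int, total_vram_gb: int,
                   profiles: List[TrainProfile]) -> Optional[TrainProfile]:
    # First pass: exact match on GPU type and GPU count, as the commit
    # message describes.
    for p in profiles:
        if p.gpu_type == gpu_type and p.gpu_count == gpu_count:
            return p
    # Fallback: the largest profile that still fits in the detected vRAM.
    fitting = [p for p in profiles if p.total_vram_gb <= total_vram_gb]
    if not fitting:
        return None
    best = max(fitting, key=lambda p: p.total_vram_gb)
    # A vRAM-based match may assume a different GPU count than we actually
    # have, so pin nproc_per_node to the detected count ("for good measure").
    best.nproc_per_node = gpu_count
    return best
```

For example, with profiles for 4×A100 and 8×L4, a machine detected as 2×H100 with 200 GB total would fall through the type/count pass, pick the 8×L4 profile on vRAM, and have nproc_per_node rewritten to 2.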