(This issue is a task issue for the PaddlePaddle Hackathon; see the PaddlePaddle Hackathon page for details.)
【Task Description】
Task title: Add AdaScale SGD to Paddle
Technical tags: deep learning framework, Python, optimization algorithms
Difficulty: medium
Detailed description: AdaScale SGD is a learning-rate scheduling strategy for large-scale distributed training, used to accelerate convergence when training with large batches; Paddle does not currently implement it. The goal of this task is to implement the AdaScale SGD optimization strategy in Paddle on top of paddle.optimizer.SGD, paddle.distributed, and other core Paddle components, matching the paper on two metrics: the maximum batch size at which training still converges, and the converged accuracy.
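For reference, the core of AdaScale is a "gain ratio" r_t that estimates how many small-batch SGD steps one large-batch step is worth; the learning rate is multiplied by r_t and the schedule is advanced by r_t instead of 1. Below is a minimal NumPy sketch of the gain estimator from the AdaScale paper (Johnson et al., 2020); the function name `adascale_gain` and its signature are hypothetical, not part of Paddle:

```python
import numpy as np

def adascale_gain(local_grads, eps=1e-12):
    """Estimate the AdaScale gain ratio r_t from S per-worker gradients.

    Hypothetical helper (not a Paddle API): implements the unbiased
    variance / squared-norm estimators from the AdaScale paper.
    """
    S = len(local_grads)
    g_bar = np.mean(local_grads, axis=0)  # aggregated (large-batch) gradient
    # Mean of per-worker squared norms estimates sigma^2 + mu^2;
    # the aggregated squared norm estimates sigma^2 / S + mu^2.
    local_sqr = np.mean([np.dot(g, g) for g in local_grads])
    total_sqr = np.dot(g_bar, g_bar)
    # Unbiased estimates of the gradient variance and true squared norm
    grad_var = max((local_sqr - total_sqr) * S / (S - 1), eps)
    grad_sqr = max(total_sqr - grad_var / S, 0.0)
    # Gain ratio r_t in [1, S]
    return (grad_var + grad_sqr) / (grad_var / S + grad_sqr)
```

When all workers see identical gradients (no variance) the gain is 1, i.e. the large batch buys nothing; when gradients are fully uncorrelated the gain approaches S. A real implementation would additionally smooth these estimates with a running average across steps.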
【Deliverables】
A task PR to Paddle
Related technical documentation
Unit test files for the task
Invocation path: paddle.optimizer.AdaScaleSGD
【Technical Requirements】
Familiarity with Paddle's optimizer pass under both dynamic and static graphs, and with Paddle's distributed framework
Proficiency in Python