[Docathon][Update Doc No.28] add the LRScheduler #7182

Merged · 2 commits · Mar 31, 2025
docs/api_guides/low_level/layers/learning_rate_scheduler.rst (5 additions, 2 deletions)
@@ -60,10 +60,13 @@
   For related API Reference please refer to :ref:`cn_api_paddle_optimizer_lr_OneCycleLR`

 * :code:`CyclicLR`: The learning rate cycles between a minimum and a maximum learning rate at a fixed frequency, following the specified scaling strategy.
-  For related API Reference please refer to :ref:`_cn_api_paddle_optimizer_lr_CyclicLR`
+  For related API Reference please refer to :ref:`cn_api_paddle_optimizer_lr_CyclicLR`

 * :code:`LinearLR`: The learning rate increases linearly with the number of steps up to the specified learning rate.
-  For related API Reference please refer to :ref:`_cn_api_paddle_optimizer_lr_LinearLR`
+  For related API Reference please refer to :ref:`cn_api_paddle_optimizer_lr_LinearLR`

 * :code:`CosineAnnealingWarmRestarts`: Cosine annealing, i.e. the learning rate varies periodically with the number of steps following a cosine function.
   For related API Reference please refer to :ref:`cn_api_paddle_optimizer_lr_CosineAnnealingWarmRestarts`
+
+* :code:`LRScheduler`: Base class for learning rate schedules; all concrete schedules inherit from it, and custom logic is implemented by overriding the get_lr() method.
+  For related API Reference please refer to :ref:`cn_api_paddle_optimizer_lr_LRScheduler`
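For readers skimming the list above, the sketch below shows the common pattern for attaching any of these schedulers to a Paddle optimizer: pass the scheduler object as the optimizer's learning_rate and call scheduler.step() during training. It is a minimal, illustrative sketch and not part of this diff; the LinearLR constructor arguments (total_steps, start_factor) and the toy model are assumptions.

```python
# Minimal sketch, not part of the diff: attach a scheduler to an optimizer.
# The LinearLR arguments (total_steps, start_factor) are assumed here; the
# same pattern applies to OneCycleLR, CyclicLR, CosineAnnealingWarmRestarts.
import paddle

model = paddle.nn.Linear(10, 1)                      # toy model for illustration
scheduler = paddle.optimizer.lr.LinearLR(
    learning_rate=0.5,                               # target learning rate
    total_steps=100,                                 # length of the linear ramp (assumed name)
    start_factor=0.1)                                # initial lr = 0.5 * 0.1 (assumed name)
opt = paddle.optimizer.SGD(learning_rate=scheduler,  # scheduler passed as the learning rate
                           parameters=model.parameters())

for step in range(100):
    x = paddle.rand([4, 10])
    loss = model(x).mean()
    loss.backward()
    opt.step()
    opt.clear_grad()
    scheduler.step()                                 # advance the schedule each step (or each epoch)
```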
@@ -48,3 +48,5 @@ The following content describes the APIs related to the learning rate scheduler:
 * :code:`LinearLR`: Linear scaling. That is, the learning rate is first multiplied by start_factor and then increases linearly to the specified learning rate. For related API Reference please refer to :ref:`api_paddle_optimizer_lr_LinearLR`

 * :code:`CosineAnnealingWarmRestarts`: Cosine annealing with warm restarts. The learning rate varies periodically with the number of steps following a cosine function. For related API Reference please refer to :ref:`api_paddle_optimizer_lr_CosineAnnealingWarmRestarts`
+
+* :code:`LRScheduler`: Base class for learning rate schedules. All concrete scheduling strategies inherit from this class and implement custom logic by overriding the get_lr() method; a sketch of such a subclass follows this diff. For related API Reference please refer to :ref:`api_paddle_optimizer_lr_LRScheduler`
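Since the new entry describes LRScheduler as the base class that custom schedules extend by overriding get_lr(), here is a minimal sketch of such a subclass. The class name StepHalving and its step_size/gamma parameters are made up for illustration; base_lr and last_epoch are attributes maintained by paddle.optimizer.lr.LRScheduler.

```python
# Minimal sketch of a custom schedule built on the LRScheduler base class.
# StepHalving, step_size and gamma are hypothetical names for illustration;
# base_lr and last_epoch are maintained by the base class.
import paddle

class StepHalving(paddle.optimizer.lr.LRScheduler):
    def __init__(self, learning_rate, step_size, gamma=0.5,
                 last_epoch=-1, verbose=False):
        self.step_size = step_size   # set custom state before calling the base __init__
        self.gamma = gamma
        super().__init__(learning_rate, last_epoch, verbose)

    def get_lr(self):
        # Multiply the base rate by gamma once every step_size calls to step().
        return self.base_lr * self.gamma ** (self.last_epoch // self.step_size)

scheduler = StepHalving(learning_rate=0.1, step_size=10)
opt = paddle.optimizer.Adam(learning_rate=scheduler,
                            parameters=paddle.nn.Linear(10, 1).parameters())
# As with the built-in schedulers, call scheduler.step() once per step or epoch.
```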