From 00b1a0d511f3261fd4f66ae5b48cc6f8bfdab8c9 Mon Sep 17 00:00:00 2001
From: nyx-c-language
Date: Sat, 29 Mar 2025 14:56:45 +0800
Subject: [PATCH 1/2] add the LRScheduler

---
 docs/api_guides/low_level/layers/learning_rate_scheduler.rst  | 3 +++
 .../api_guides/low_level/layers/learning_rate_scheduler_en.rst | 2 ++
 2 files changed, 5 insertions(+)

diff --git a/docs/api_guides/low_level/layers/learning_rate_scheduler.rst b/docs/api_guides/low_level/layers/learning_rate_scheduler.rst
index 0a18dde0f9a..df8dfa3cc1b 100644
--- a/docs/api_guides/low_level/layers/learning_rate_scheduler.rst
+++ b/docs/api_guides/low_level/layers/learning_rate_scheduler.rst
@@ -67,3 +67,6 @@
 
 * :code:`CosineAnnealingWarmRestarts`: 余弦退火学习率,即学习率随 step 数变化呈余弦函数周期变化。
   相关 API Reference 请参考 :ref:`cn_api_paddle_optimizer_lr_CosineAnnealingWarmRestarts`
+
+* :code:`LRScheduler`: 学习率策略基类,所有具体策略均继承自此类,需重写 get_lr() 方法实现自定义逻辑。
+  相关 API Reference 请参考 :ref:`_cn_api_paddle_optimizer_lr_LRScheduler`
diff --git a/docs/api_guides/low_level/layers/learning_rate_scheduler_en.rst b/docs/api_guides/low_level/layers/learning_rate_scheduler_en.rst
index 06e4a06870c..0ee56342fd9 100755
--- a/docs/api_guides/low_level/layers/learning_rate_scheduler_en.rst
+++ b/docs/api_guides/low_level/layers/learning_rate_scheduler_en.rst
@@ -48,3 +48,5 @@ The following content describes the APIs related to the learning rate scheduler:
 * :code:`LinearLR`: Linear decay. That is, the learning rate will be firstly multiplied by start_factor and linearly increase to end learning rate. For related API Reference please refer to :ref:`api_paddle_optimizer_lr_LinearLR`
 
 * :code:`CosineAnnealingWarmRestarts`: Cosine attenuation. It means the learning rate changes with the number of steps in the form of a cosine function. For related API Reference please refer to :ref:`api_paddle_optimizer_lr_CosineAnnealingWarmRestarts`
+
+* :code:`LRScheduler`: Learning rate scheduling base class. All specific learning rate scheduling strategies inherit from this class. For related API Reference please refer to :ref:`api_paddle_optimizer_lr_LRScheduler`

From 56d9ad1d0d735933d18a12b541d82ddfbe7d9497 Mon Sep 17 00:00:00 2001
From: nyx-c-language
Date: Mon, 31 Mar 2025 11:35:18 +0800
Subject: [PATCH 2/2] fix ref rendering

---
 .../api_guides/low_level/layers/learning_rate_scheduler.rst | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/api_guides/low_level/layers/learning_rate_scheduler.rst b/docs/api_guides/low_level/layers/learning_rate_scheduler.rst
index df8dfa3cc1b..f2fb914bee6 100644
--- a/docs/api_guides/low_level/layers/learning_rate_scheduler.rst
+++ b/docs/api_guides/low_level/layers/learning_rate_scheduler.rst
@@ -60,13 +60,13 @@
   相关 API Reference 请参考 :ref:`cn_api_paddle_optimizer_lr_OneCycleLR`
 
 * :code:`CyclicLR`: 学习率根据指定的缩放策略以固定频率在最小和最大学习率之间进行循环。
-  相关 API Reference 请参考 :ref:`_cn_api_paddle_optimizer_lr_CyclicLR`
+  相关 API Reference 请参考 :ref:`cn_api_paddle_optimizer_lr_CyclicLR`
 
 * :code:`LinearLR`: 学习率随 step 数线性增加到指定学习率。
-  相关 API Reference 请参考 :ref:`_cn_api_paddle_optimizer_lr_LinearLR`
+  相关 API Reference 请参考 :ref:`cn_api_paddle_optimizer_lr_LinearLR`
 
 * :code:`CosineAnnealingWarmRestarts`: 余弦退火学习率,即学习率随 step 数变化呈余弦函数周期变化。
   相关 API Reference 请参考 :ref:`cn_api_paddle_optimizer_lr_CosineAnnealingWarmRestarts`
 
 * :code:`LRScheduler`: 学习率策略基类,所有具体策略均继承自此类,需重写 get_lr() 方法实现自定义逻辑。
-  相关 API Reference 请参考 :ref:`_cn_api_paddle_optimizer_lr_LRScheduler`
+  相关 API Reference 请参考 :ref:`cn_api_paddle_optimizer_lr_LRScheduler`
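As a side note on the bullet these patches add: it documents that every concrete scheduler inherits from `LRScheduler` and that custom schedules are written by overriding `get_lr()`. The sketch below illustrates that contract without requiring Paddle to be installed; `LRSchedulerSketch` is a local stand-in that only mimics the shape of `paddle.optimizer.lr.LRScheduler`, and `StepHalving` with its `step_size` parameter is a hypothetical example scheduler, not a Paddle API.

```python
class LRSchedulerSketch:
    # Local stand-in imitating the contract of paddle.optimizer.lr.LRScheduler
    # (base learning rate, epoch counter, step() driving get_lr()).
    def __init__(self, learning_rate=0.1, last_epoch=-1):
        self.base_lr = learning_rate
        self.last_epoch = last_epoch
        self.last_lr = learning_rate
        self.step()  # populate last_lr from get_lr() at epoch 0

    def get_lr(self):
        # Subclasses must override this with their schedule logic.
        raise NotImplementedError

    def step(self, epoch=None):
        # Advance the epoch counter and recompute the current learning rate.
        self.last_epoch = self.last_epoch + 1 if epoch is None else epoch
        self.last_lr = self.get_lr()


class StepHalving(LRSchedulerSketch):
    # Hypothetical custom scheduler: halves the learning rate every
    # `step_size` epochs, purely by overriding get_lr().
    def __init__(self, learning_rate=0.1, step_size=10):
        self.step_size = step_size
        super().__init__(learning_rate)

    def get_lr(self):
        return self.base_lr * (0.5 ** (self.last_epoch // self.step_size))


sched = StepHalving(learning_rate=0.1, step_size=10)
for _ in range(10):
    sched.step()
print(sched.last_lr)  # after 10 steps the rate has halved once: 0.05
```

In real Paddle code the subclass would derive from `paddle.optimizer.lr.LRScheduler` directly; the override-`get_lr()` pattern is the same.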