fix en docs of some Apis (gradients, scope_guard, cuda_places, name_scope, device_guard, load_program_state, scale, ParamAttr and WeightNormParamAttr) (#41604)

* Update scope_guard; test=document_fix

* gradients; test=document_fix

* gradients; test=document_fix

* name_scope; test=document_fix

* cpu_places; test=document_fix

* WeightNormParamAttr; test=document_fix

* cuda_places; test=document_fix

* load_program_state; test=document_fix

* device_guard; test=document_fix

* device_guard; test=document_fix

* ParamAttr; test=document_fix

* scale; test=document_fix

* scale; test=document_fix

* update code example;test=document_fix

Co-authored-by: Chen Long <1300851984@qq.com>
Yilingyelu and TCChenlong authored Apr 25, 2022
1 parent 4185226 commit 6dd9dd3
Showing 6 changed files with 30 additions and 24 deletions.
4 changes: 2 additions & 2 deletions python/paddle/fluid/backward.py
@@ -2021,7 +2021,6 @@ def calc_gradient(targets, inputs, target_gradients=None, no_grad_set=None):
@framework.static_only
def gradients(targets, inputs, target_gradients=None, no_grad_set=None):
"""
:api_attr: Static Graph
Backpropagate the gradients of targets to inputs.
@@ -2042,8 +2041,9 @@ def gradients(targets, inputs, target_gradients=None, no_grad_set=None):
will be None.
Examples:
.. code-block:: python
:name: code-example
import paddle
import paddle.nn.functional as F
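For reference, a minimal usage sketch of the `gradients` API documented above (assuming Paddle 2.x, where it is exposed as `paddle.static.gradients`; the tensor shape is illustrative):

.. code-block:: python

    import paddle
    import paddle.nn.functional as F

    paddle.enable_static()

    x = paddle.static.data(name='x', shape=[None, 2, 8, 8], dtype='float32')
    x.stop_gradient = False
    y = F.relu(x)
    # gradients of y with respect to x; returns a list of Tensors
    grads = paddle.static.gradients([y], [x])
    print(grads)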
2 changes: 1 addition & 1 deletion python/paddle/fluid/executor.py
@@ -75,7 +75,6 @@ def _switch_scope(scope):
@signature_safe_contextmanager
def scope_guard(scope):
"""
:api_attr: Static Graph
This function switches scope through the Python `with` statement.
Scope records the mapping between variable names and variables ( :ref:`api_guide_Variable` ),
@@ -94,6 +93,7 @@ def scope_guard(scope):
None
Examples:
.. code-block:: python
import paddle
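A minimal sketch of the scope switching described above, assuming Paddle 2.x (`data` is an illustrative variable name):

.. code-block:: python

    import numpy as np
    import paddle

    paddle.enable_static()

    new_scope = paddle.static.Scope()
    with paddle.static.scope_guard(new_scope):
        # inside the guard, the global scope is new_scope
        paddle.static.global_scope().var("data").get_tensor().set(
            np.ones((2, 2), dtype=np.float32), paddle.CPUPlace())
    # the variable was created in new_scope, not in the default scope
    print(np.array(new_scope.find_var("data").get_tensor()))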
15 changes: 10 additions & 5 deletions python/paddle/fluid/framework.py
@@ -729,7 +729,7 @@ def is_compiled_with_rocm():

def cuda_places(device_ids=None):
"""
**Note**:
Note:
For multi-card tasks, please use the `FLAGS_selected_gpus` environment variable to set the visible GPU devices.
The next version will fix the problem with the `CUDA_VISIBLE_DEVICES` environment variable.
@@ -754,6 +755,7 @@ def cuda_places(device_ids=None):
list of paddle.CUDAPlace: Created GPU place list.
Examples:
.. code-block:: python
import paddle
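A minimal sketch of `cuda_places`, assuming a GPU build of Paddle 2.x (with `device_ids=None` the result falls back to `FLAGS_selected_gpus` or all visible devices):

.. code-block:: python

    import paddle

    paddle.enable_static()

    # one paddle.CUDAPlace per requested device id
    gpu_places = paddle.static.cuda_places(device_ids=[0, 1])
    print(gpu_places)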
@@ -874,6 +875,7 @@ def cpu_places(device_count=None):
list of paddle.CPUPlace: Created list of CPU places.
Examples:
.. code-block:: python
import paddle
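Likewise for `cpu_places`, assuming Paddle 2.x (without `device_count`, the number of places falls back to the `CPU_NUM` environment variable):

.. code-block:: python

    import paddle

    paddle.enable_static()

    # a list of 4 paddle.CPUPlace objects
    cpu_places = paddle.static.cpu_places(device_count=4)
    print(cpu_places)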
@@ -993,7 +995,6 @@ def name(self):
@signature_safe_contextmanager
def name_scope(prefix=None):
"""
:api_attr: Static Graph
Generate hierarchical name prefix for the operators in Static Graph.
@@ -1006,6 +1007,7 @@ def name_scope(prefix=None):
prefix(str, optional): prefix. Default is None.
Examples:
.. code-block:: python
import paddle
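A minimal sketch of the hierarchical prefixes generated by `name_scope`, assuming Paddle 2.x static mode (the scope names are illustrative):

.. code-block:: python

    import paddle

    paddle.enable_static()

    with paddle.static.name_scope("s1"):
        a = paddle.static.data(name='data', shape=[None, 1], dtype='int32')
        b = a + 1                      # op name is prefixed with /s1/
        with paddle.static.name_scope("s2"):
            c = b * 2                  # nested: prefixed with /s1/s2/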
@@ -6916,8 +6918,9 @@ def switch_device(device):
@signature_safe_contextmanager
def device_guard(device=None):
"""
**Notes**:
**The API only supports static mode.**
Note:
The API only supports static mode.
A context manager that specifies the device on which the OP will be placed.
Expand All @@ -6931,8 +6934,10 @@ def device_guard(device=None):
assigned devices.
Examples:
.. code-block:: python
# required: gpu
import paddle
paddle.enable_static()
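A minimal sketch of pinning ops to devices with `device_guard`, assuming Paddle 2.x (it degrades to CPU when no GPU build is available):

.. code-block:: python

    import paddle

    paddle.enable_static()

    has_gpu = paddle.is_compiled_with_cuda()
    place = paddle.CUDAPlace(0) if has_gpu else paddle.CPUPlace()

    with paddle.static.device_guard("cpu"):
        # ops created here are placed on CPU
        data = paddle.full(shape=[1, 3, 8, 8], fill_value=0.5, dtype='float32')
    with paddle.static.device_guard("gpu" if has_gpu else "cpu"):
        out = paddle.reshape(data, shape=[1, 3, 64])

    exe = paddle.static.Executor(place)
    exe.run(paddle.static.default_startup_program())
    result = exe.run(fetch_list=[out])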
2 changes: 1 addition & 1 deletion python/paddle/fluid/io.py
@@ -2154,7 +2154,6 @@ def set_var(var, ndarray):

def load_program_state(model_path, var_list=None):
"""
:api_attr: Static Graph
Load program state from a local file
@@ -2169,6 +2168,7 @@ def load_program_state(model_path, var_list=None):
state_dict(dict): the dict storing Parameter and optimizer information
Examples:
.. code-block:: python
import paddle
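A minimal sketch of saving a program and reloading its raw state via `load_program_state`, assuming Paddle 2.x (`./test_path` is an illustrative path):

.. code-block:: python

    import paddle

    paddle.enable_static()

    x = paddle.static.data(name='x', shape=[10, 10], dtype='float32')
    y = paddle.static.nn.fc(x, 10)

    exe = paddle.static.Executor(paddle.CPUPlace())
    exe.run(paddle.static.default_startup_program())
    paddle.static.save(paddle.static.default_main_program(), './test_path')

    # returns a dict of parameter/optimizer ndarrays, without needing an Executor
    program_state = paddle.static.load_program_state('./test_path')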
4 changes: 2 additions & 2 deletions python/paddle/fluid/layers/nn.py
@@ -11850,8 +11850,7 @@ def _elementwise_op(helper):

def scale(x, scale=1.0, bias=0.0, bias_after_scale=True, act=None, name=None):
"""
Scale operator.


Applies scale and bias to the input Tensor as follows:

``bias_after_scale`` is True:
@@ -11876,6 +11875,7 @@ def scale(x, scale=1.0, bias=0.0, bias_after_scale=True, act=None, name=None):
Tensor: Output tensor of the scale operator, with the same shape and data type as the input.

Examples:

.. code-block:: python

# scale as a float32 number
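A minimal sketch of the two formulas above, assuming Paddle 2.x, where `paddle.scale` is the public counterpart of this layer:

.. code-block:: python

    import paddle

    x = paddle.to_tensor([[1.0, 2.0], [3.0, 4.0]])

    # bias_after_scale=True: out = x * scale + bias
    out = paddle.scale(x, scale=2.0, bias=1.0)
    # [[3., 5.], [7., 9.]]

    # bias_after_scale=False: out = (x + bias) * scale
    out = paddle.scale(x, scale=2.0, bias=1.0, bias_after_scale=False)
    # [[4., 6.], [8., 10.]]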
27 changes: 14 additions & 13 deletions python/paddle/fluid/param_attr.py
@@ -30,16 +30,17 @@

class ParamAttr(object):
"""
Create an object to represent the attribute of parameter. The attributes are:
name, initializer, learning rate, regularizer, trainable, gradient clip,
and model average.
Note:
``gradient_clip`` of ``ParamAttr`` HAS BEEN DEPRECATED since 2.0.
Please use ``need_clip`` in ``ParamAttr`` to specify the clip scope.
There are three clipping strategies: :ref:`api_paddle_nn_ClipGradByGlobalNorm` ,
:ref:`api_paddle_nn_ClipGradByNorm` , :ref:`api_paddle_nn_ClipGradByValue` .
Create an object to represent the attribute of parameter. The attributes are:
name, initializer, learning rate, regularizer, trainable, gradient clip,
and model average.
Parameters:
name (str, optional): The parameter's name. Default None, meaning that the name
would be created automatically.
Expand All @@ -63,6 +64,7 @@ class ParamAttr(object):
ParamAttr Object.
Examples:
.. code-block:: python
import paddle
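A minimal sketch of attaching a `ParamAttr` to a layer, assuming Paddle 2.x (the attribute values are illustrative):

.. code-block:: python

    import paddle

    weight_attr = paddle.ParamAttr(
        name="fc_weight",
        learning_rate=0.5,
        regularizer=paddle.regularizer.L2Decay(1.0),
        trainable=True,
        need_clip=True)

    linear = paddle.nn.Linear(3, 4, weight_attr=weight_attr)
    print(linear.weight.name)  # fc_weight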
@@ -213,24 +215,22 @@ def _to_kwargs(self, with_initializer=False):

class WeightNormParamAttr(ParamAttr):
r"""
:api_attr: Static Graph
Note:
Please use 'paddle.nn.utils.weight_norm' in dygraph mode.
Note:
``gradient_clip`` of ``ParamAttr`` HAS BEEN DEPRECATED since 2.0.
Please use ``need_clip`` in ``ParamAttr`` to specify the clip scope.
There are three clipping strategies: :ref:`api_paddle_nn_ClipGradByGlobalNorm` ,
:ref:`api_paddle_nn_ClipGradByNorm` , :ref:`api_paddle_nn_ClipGradByValue` .
Parameter of weight Norm. Weight Norm is a reparameterization of the weight vectors
in a neural network that decouples the magnitude of those weight vectors from
their direction. Weight Norm has been implemented as discussed in this
paper: `Weight Normalization: A Simple Reparameterization to Accelerate
Training of Deep Neural Networks
<https://arxiv.org/pdf/1602.07868.pdf>`_.
Note:
``gradient_clip`` of ``ParamAttr`` HAS BEEN DEPRECATED since 2.0.
Please use ``need_clip`` in ``ParamAttr`` to specify the clip scope.
There are three clipping strategies: :ref:`api_paddle_nn_ClipGradByGlobalNorm` ,
:ref:`api_paddle_nn_ClipGradByNorm` , :ref:`api_paddle_nn_ClipGradByValue` .
Args:
dim(int, optional): Dimension over which to compute the norm. Dim is a non-negative
@@ -258,6 +258,7 @@ class WeightNormParamAttr(ParamAttr):
need_clip (bool, optional): Whether the parameter gradient needs to be clipped in the optimizer. Default is True.
Examples:
.. code-block:: python
import paddle
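A minimal sketch of `WeightNormParamAttr` in static mode, assuming Paddle 2.x (the initializer and regularizer choices are illustrative):

.. code-block:: python

    import paddle

    paddle.enable_static()

    data = paddle.static.data(name="data", shape=[3, 32, 32], dtype="float32")
    fc = paddle.static.nn.fc(
        x=data,
        size=1000,
        weight_attr=paddle.static.WeightNormParamAttr(
            dim=None,  # compute the norm over all dimensions
            name='weight_norm_param',
            initializer=paddle.nn.initializer.Constant(1.0),
            regularizer=paddle.regularizer.L2Decay(0.1),
            trainable=True,
            do_model_average=False,
            need_clip=True))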
