diff --git a/.github/ISSUE_TEMPLATE/1_documentation-issue.yml b/.github/ISSUE_TEMPLATE/1_documentation-issue.yml index f558d343ea7..35a37d5f8ff 100644 --- a/.github/ISSUE_TEMPLATE/1_documentation-issue.yml +++ b/.github/ISSUE_TEMPLATE/1_documentation-issue.yml @@ -7,7 +7,7 @@ body: attributes: value: > #### 请确认反馈的问题来自PaddlePaddle官网文档:https://www.paddlepaddle.org.cn/ 。 - + #### Before submitting a Documentation Issue, Please make sure that issue is related to https://www.paddlepaddle.org.cn/. - type: textarea @@ -28,16 +28,11 @@ body: description: | 请告诉我们,你希望如何改进这个文档。 Please tell us how you would like to improve this document. - + validations: required: false - + - type: markdown attributes: value: > 感谢你的贡献 🎉!Thanks for your contribution 🎉! - - - - - diff --git a/.github/ISSUE_TEMPLATE/2_eval-issue.yml b/.github/ISSUE_TEMPLATE/2_eval-issue.yml index 0d9af3bd18e..2cb07dfd957 100644 --- a/.github/ISSUE_TEMPLATE/2_eval-issue.yml +++ b/.github/ISSUE_TEMPLATE/2_eval-issue.yml @@ -7,7 +7,7 @@ body: attributes: value: > #### 此模板用于文档评估工作小组成员向飞桨团队反馈文档问题,如非工作小组成员,请使用「📚 文档」模板,感谢配合~ - + #### This template is used by members of the document evaluation working group to report documentation issues to the Paddle Team. If you are not a member of the working group, please use the "📚 Documentation Issue" template, thank you for your cooperation~ - type: textarea @@ -38,11 +38,11 @@ body: description: | 请评估本篇文档,是否按API文档推荐的结构书写,并且没有遗漏(无需写的部分可以省略)? 如有结构问题,请在以下文本框中说明。 - + API文档推荐的书写结构:API名称、API功能描述、API参数、API返回、API代码示例、API属性(class)、API方法(methods) validations: required: false - + - type: textarea id: eval3 attributes: @@ -56,7 +56,7 @@ body: - 没有死链、错字、错误标点等低级错误。 validations: required: false - + - type: textarea id: eval4 attributes: @@ -79,10 +79,8 @@ body: 如你已经发现了文档的错误并且判断其容易被修复,欢迎你为[Paddle](https://github.com/PaddlePaddle/Paddle/pulls)与[docs](https://github.com/PaddlePaddle/docs/pulls)提PR,直接修复此文档,并将PR链接贴在以下文本框中~ validations: required: true - + - type: markdown attributes: value: > 感谢你的贡献 🎉!Thanks for your contribution 🎉! - - diff --git a/.github/ISSUE_TEMPLATE/3_eval_distributed_DataParallel_docs-issue.yml b/.github/ISSUE_TEMPLATE/3_eval_distributed_DataParallel_docs-issue.yml index 69df305a475..f28e2199392 100644 --- a/.github/ISSUE_TEMPLATE/3_eval_distributed_DataParallel_docs-issue.yml +++ b/.github/ISSUE_TEMPLATE/3_eval_distributed_DataParallel_docs-issue.yml @@ -33,7 +33,7 @@ body: - 其他 validations: required: true - + - type: markdown attributes: value: > @@ -54,7 +54,7 @@ body: attributes: value: > ## 任务一体验评测 - + - type: textarea id: distributed_dp_eval1 attributes: @@ -111,7 +111,7 @@ body: - 对应文档是否描述清晰(如不清晰,需注明问题并提供文档链接): validations: required: true - + - type: textarea id: distributed_dp_eval5 attributes: @@ -125,7 +125,7 @@ body: - 对应文档是否描述清晰(如不清晰,需注明问题并提供文档链接): validations: required: true - + - type: textarea id: distributed_dp_eval6 attributes: @@ -191,7 +191,7 @@ body: attributes: value: > ## 任务二体验评测 - + - type: textarea id: distributed_dp_eval11 attributes: @@ -248,7 +248,7 @@ body: - 对应文档是否描述清晰(如不清晰,需注明问题并提供文档链接): validations: required: true - + - type: textarea id: distributed_dp_eval15 attributes: @@ -262,7 +262,7 @@ body: - 对应文档是否描述清晰(如不清晰,需注明问题并提供文档链接): validations: required: true - + - type: textarea id: distributed_dp_eval16 attributes: @@ -290,7 +290,7 @@ body: - 对应文档是否描述清晰(如不清晰,需注明问题并提供文档链接): validations: required: false - + - type: textarea id: distributed_dp_eval18 attributes: @@ -314,4 +314,3 @@ body: attributes: value: > 感谢你的贡献 🎉!Thanks for your contribution 🎉! 
- diff --git a/.github/ISSUE_TEMPLATE/4_eval_distributed_PS_docs-issue.yml b/.github/ISSUE_TEMPLATE/4_eval_distributed_PS_docs-issue.yml index 05f47036f12..bbfed3e7d46 100644 --- a/.github/ISSUE_TEMPLATE/4_eval_distributed_PS_docs-issue.yml +++ b/.github/ISSUE_TEMPLATE/4_eval_distributed_PS_docs-issue.yml @@ -43,7 +43,7 @@ body: attributes: value: > ## 任务一体验评测 - + - type: textarea id: distributed_ps_eval1 attributes: @@ -100,7 +100,7 @@ body: - 对应文档是否描述清晰(如不清晰,需注明问题并提供文档链接): validations: required: true - + - type: textarea id: distributed_ps_eval5 attributes: @@ -114,7 +114,7 @@ body: - 对应文档是否描述清晰(如不清晰,需注明问题并提供文档链接): validations: required: true - + - type: textarea id: distributed_ps_eval6 attributes: @@ -128,7 +128,7 @@ body: - 对应文档是否描述清晰(如不清晰,需注明问题并提供文档链接): validations: required: true - + - type: textarea id: distributed_ps_eval7 attributes: @@ -167,7 +167,7 @@ body: attributes: value: > ## 任务二体验评测 - + - type: textarea id: distributed_ps_eval11 attributes: @@ -224,7 +224,7 @@ body: - 对应文档是否描述清晰(如不清晰,需注明问题并提供文档链接): validations: required: true - + - type: textarea id: distributed_ps_eval15 attributes: @@ -238,7 +238,7 @@ body: - 对应文档是否描述清晰(如不清晰,需注明问题并提供文档链接): validations: required: true - + - type: textarea id: distributed_ps_eval16 attributes: @@ -276,4 +276,3 @@ body: attributes: value: > 感谢你的贡献 🎉!Thanks for your contribution 🎉! - diff --git a/.gitignore b/.gitignore index 10cc233f57f..e9b861afc24 100644 --- a/.gitignore +++ b/.gitignore @@ -7,4 +7,4 @@ build .vscode venv/ -*.pyc \ No newline at end of file +*.pyc diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index a018d2f2b27..d1293fb1fb9 100755 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -5,16 +5,16 @@ repos: - id: black files: \.py$ - repo: https://github.com/pre-commit/pre-commit-hooks - rev: v4.1.0 + rev: v4.4.0 hooks: - id: check-merge-conflict - id: check-symlinks - id: detect-private-key files: (?!.*paddle)^.*$ - id: end-of-file-fixer - files: \.md$|\.rst$ + types: [text] - id: trailing-whitespace - files: \.md$|\.rst$ + types: [text] - repo: https://github.com/Lucas-C/pre-commit-hooks rev: v1.5.1 hooks: @@ -43,4 +43,3 @@ repos: entry: python .pre-commit-hooks/convert_markdown_into_html.py language: system files: .+README(\.cn)?\.md$ - diff --git a/.pre-commit-hooks/convert_markdown_into_ipynb.sh b/.pre-commit-hooks/convert_markdown_into_ipynb.sh index dbcb1046d82..e19e642af78 100644 --- a/.pre-commit-hooks/convert_markdown_into_ipynb.sh +++ b/.pre-commit-hooks/convert_markdown_into_ipynb.sh @@ -6,4 +6,3 @@ for file in $@ ; do exit 1 fi done - diff --git a/ci_scripts/CAPItools/requirements.txt b/ci_scripts/CAPItools/requirements.txt index 13a2a3bf3d7..36b8c3522de 100644 --- a/ci_scripts/CAPItools/requirements.txt +++ b/ci_scripts/CAPItools/requirements.txt @@ -1,2 +1,2 @@ robotpy-cppheaderparser==5.1.0 -# paddle \ No newline at end of file +# paddle diff --git a/ci_scripts/api_white_list.txt b/ci_scripts/api_white_list.txt index b369fbb0f28..aeccc4b40b5 100644 --- a/ci_scripts/api_white_list.txt +++ b/ci_scripts/api_white_list.txt @@ -21,4 +21,4 @@ paddle/utils/cpp_extension/setup_cn.rst paddle/utils/cpp_extension/CppExtension_cn.rst paddle/utils/cpp_extension/CUDAExtension_cn.rst paddle/hub/Overview_cn.rst -paddle/incubate/autograd/Overview_cn.rst \ No newline at end of file +paddle/incubate/autograd/Overview_cn.rst diff --git a/ci_scripts/check_api_cn.sh b/ci_scripts/check_api_cn.sh index 668f9047e32..10618588a5a 100644 --- a/ci_scripts/check_api_cn.sh +++ b/ci_scripts/check_api_cn.sh @@ 
-16,12 +16,12 @@ function filter_cn_api_files() { local need_check_files="" for file in `echo $git_files`;do grep 'code-block:: python' ../docs/$file > /dev/null - if [ $? -eq 0 ] ;then + if [ $? -eq 0 ] ;then api_file=`echo $file | sed 's#api/##g'` grep -w "${api_file}" ${DIR_PATH}/api_white_list.txt > /dev/null if [ $? -ne 0 ];then need_check_files="${need_check_files} $file" - fi + fi fi done if [[ "$__resultvar" ]] ; then @@ -40,4 +40,4 @@ python check_copy_from_parsed_into_sample_code.py "${OUTPUTDIR}/zh/${VERSIONSTR} if [ $? -ne 0 ];then echo "ERROR: Exist COPY-FROM has not been parsed into sample code, please check COPY-FROM in the above files" exit 1 -fi \ No newline at end of file +fi diff --git a/ci_scripts/check_api_docs_en.sh b/ci_scripts/check_api_docs_en.sh index a68b76b30a8..a0e161c3825 100644 --- a/ci_scripts/check_api_docs_en.sh +++ b/ci_scripts/check_api_docs_en.sh @@ -18,4 +18,4 @@ jsonfn=$1 output_path=$2 need_check_api_py_files="${3}" echo "RUN System Message MARNING/ERROR Check" -check_system_message $jsonfn $output_path "${need_check_api_py_files}" \ No newline at end of file +check_system_message $jsonfn $output_path "${need_check_api_py_files}" diff --git a/ci_scripts/check_api_label_cn.sh b/ci_scripts/check_api_label_cn.sh index 2d6b548dac7..23a888999a1 100644 --- a/ci_scripts/check_api_label_cn.sh +++ b/ci_scripts/check_api_label_cn.sh @@ -21,4 +21,4 @@ python check_api_label_cn.py ${DOCROOT} ${APIROOT} $all_git_files if [ $? -ne 0 ];then echo "ERROR: api_label is not correct, please check api_label in the above files" exit 1 -fi \ No newline at end of file +fi diff --git a/ci_scripts/check_api_parameters.sh b/ci_scripts/check_api_parameters.sh index b0ab6477d74..e0756043afc 100644 --- a/ci_scripts/check_api_parameters.sh +++ b/ci_scripts/check_api_parameters.sh @@ -12,7 +12,7 @@ function filter_cn_api_files() { local need_check_files="" for file in `echo $git_files`;do echo "$file" | grep '.*\.rst$' > /dev/null - if [ $? -eq 0 ] ;then + if [ $? -eq 0 ] ;then need_check_files="${need_check_files} $file" fi done diff --git a/ci_scripts/checkapproval.sh b/ci_scripts/checkapproval.sh index 1bc8996f29a..36220e78db8 100644 --- a/ci_scripts/checkapproval.sh +++ b/ci_scripts/checkapproval.sh @@ -25,4 +25,3 @@ for API_FILE in ${API_FILES[*]}; do exit 1 fi done - diff --git a/ci_scripts/ci_start.sh b/ci_scripts/ci_start.sh index d045c168bed..d137217f97f 100644 --- a/ci_scripts/ci_start.sh +++ b/ci_scripts/ci_start.sh @@ -36,7 +36,7 @@ fi export PADDLE_WHL echo "PADDLE_WHL=${PADDLE_WHL}" set -x - + # 2 build all the Chinese and English docs, and upload them. Controlled with Env BUILD_DOC and UPLOAD_DOC PREVIEW_URL_PROMPT="ipipe_log_param_preview_url: None" if [ "${BUILD_DOC}" = "true" ] && [ -x /usr/local/bin/sphinx-build ] ; then @@ -46,7 +46,7 @@ if [ "${BUILD_DOC}" = "true" ] && [ -x /usr/local/bin/sphinx-build ] ; then if [ $? -ne 0 ];then exit 1 fi - + is_shell_attribute_set x xdebug_setted=$? if [ $xdebug_setted ] ; then diff --git a/ci_scripts/ci_start_en.sh b/ci_scripts/ci_start_en.sh index 0c139132c70..05c80d4b67b 100644 --- a/ci_scripts/ci_start_en.sh +++ b/ci_scripts/ci_start_en.sh @@ -42,14 +42,14 @@ echo "PADDLE_WHL=${PADDLE_WHL}" # 2 build all the Chinese and English docs, and upload them. 
Controlled with Env BUILD_DOC and UPLOAD_DOC PREVIEW_URL_PROMPT="ipipe_log_param_preview_url: None" if [ "${BUILD_DOC}" = "true" ] && [ -x /usr/local/bin/sphinx-build ] ; then - apt update - apt install -y libpython3.7 + apt update + apt install -y libpython3.7 apt --fix-broken install -y libssl1.0 /bin/bash -x ${DIR_PATH}/gendoc.sh if [ $? -ne 0 ] ; then exit 1 fi - + is_shell_attribute_set x xdebug_setted=$? if [ $xdebug_setted ] ; then @@ -67,7 +67,7 @@ if [ "${BUILD_DOC}" = "true" ] && [ -x /usr/local/bin/sphinx-build ] ; then if [ $xdebug_setted ] ; then set -x fi - + # https://cloud.baidu.com/doc/XLY/s/qjwvy89pc#%E7%B3%BB%E7%BB%9F%E5%8F%82%E6%95%B0%E5%A6%82%E4%B8%8B # ${AGILE_PIPELINE_ID}-${AGILE_PIPELINE_BUILD_ID}" if [ "${UPLOAD_DOC}" = "true" ] ; then @@ -124,7 +124,7 @@ fi if [ "${need_check_api_py_files}" = "" ] ; then echo "api python file list is empty, skip check system message in docs" else - echo 'need check api pyhon file: ', $need_check_api_py_files + echo 'need check api pyhon file: ', $need_check_api_py_files /bin/bash ${DIR_PATH}/check_api_docs_en.sh ${jsonfn} ${OUTPUTDIR}/en/${VERSIONSTR}/api/ "${need_check_api_py_files}" if [ $? -ne 0 ]; then echo 'Docs Style Check is failed, please check the style in the above docs' diff --git a/ci_scripts/gendoc.sh b/ci_scripts/gendoc.sh index 79c9fdf04f3..70bb2c66210 100755 --- a/ci_scripts/gendoc.sh +++ b/ci_scripts/gendoc.sh @@ -39,7 +39,7 @@ rm $tmp_fifofile # 清空管道内容 # for循环 往 fifo管道文件中写入$thread个空行 for ((i=0;i<$thread;i++));do - echo + echo done >&6 @@ -54,7 +54,7 @@ for lang in en zh ; do if [ "${lang}" = "zh" ] ; then INDEXFILE="${OUTPUTDIR}/${lang}/${VERSIONSTR}/index_cn.html" fi - if [ ! -f ${INDEXFILE} ] ; then + if [ ! -f ${INDEXFILE} ] ; then /usr/local/bin/sphinx-build -b ${OUTPUTFORMAT} -j ${sphinx_thread} -d /var/doctrees -c ${CONFIGDIR}/${lang} ${DOCROOT} ${OUTPUTDIR}/${lang}/${VERSIONSTR} fi diff --git a/ci_scripts/hooks/post-doc-compile.sh b/ci_scripts/hooks/post-doc-compile.sh index 5aeced52228..af10c562771 100755 --- a/ci_scripts/hooks/post-doc-compile.sh +++ b/ci_scripts/hooks/post-doc-compile.sh @@ -19,5 +19,3 @@ VERSIONSTR=${2} SCRIPT_DIR="$( cd "$( dirname "$0" )" && pwd )" python ${SCRIPT_DIR}/post_filter_htmls.py ${OUTPUTDIR}/en/${VERSIONSTR}/api/ - - diff --git a/ci_scripts/utils.sh b/ci_scripts/utils.sh index 8482e384e3c..a71cbf05259 100644 --- a/ci_scripts/utils.sh +++ b/ci_scripts/utils.sh @@ -104,7 +104,7 @@ function find_all_cn_api_files_modified_by_pr() { if [ $? -ne 0 ] ; then remotename=origin fi - local need_check_cn_doc_files=`git diff --name-only --diff-filter=ACMR ${remotename}/${BRANCH} | grep "docs/api/paddle/.*_cn.rst" | sed 's#docs/##g'` + local need_check_cn_doc_files=`git diff --name-only --diff-filter=ACMR ${remotename}/${BRANCH} | grep "docs/api/paddle/.*_cn.rst" | sed 's#docs/##g'` if [[ "$__resultvar" ]] ; then eval $__resultvar="$need_check_cn_doc_files" else @@ -118,7 +118,7 @@ function find_all_api_py_files_modified_by_pr() { if [ $? 
-ne 0 ] ; then remotename=origin fi - - local need_check_api_py_files=`git diff --name-only --diff-filter=ACMR ${remotename}/${BRANCH} | grep "python/paddle/.*.py" | sed 's#docs/##g'` + + local need_check_api_py_files=`git diff --name-only --diff-filter=ACMR ${remotename}/${BRANCH} | grep "python/paddle/.*.py" | sed 's#docs/##g'` echo "$need_check_api_py_files" -} \ No newline at end of file +} diff --git a/docs-build.sh b/docs-build.sh index f9d1c6b6b1e..47b16dbdc5e 100644 --- a/docs-build.sh +++ b/docs-build.sh @@ -1,11 +1,11 @@ #!/bin/bash - -SELFNAME=$0 + +SELFNAME=$0 SHORT=f:p:w:hx: LONG=docs-dir:,paddle-dir:,paddle-whl:,https-proxy: OPTIND=1 - + show_help() { cat < [-p ] @@ -19,7 +19,7 @@ Options: -x Set https_proxy HELP_HELP_HELP } - + while getopts $SHORT opt; do case "$opt" in f) @@ -119,4 +119,3 @@ if ! [ "$PADDLE_DIR" = '' ]; then /root/fluiddoc-gendoc.sh /whls/${WHL_FN} exit 0 fi - diff --git a/docs/api/api_aliases.ini b/docs/api/api_aliases.ini index e86f2aedb75..4ea712e03a3 100644 --- a/docs/api/api_aliases.ini +++ b/docs/api/api_aliases.ini @@ -4,4 +4,4 @@ help_zh=左侧是target name,右侧是origin name, 如paddle.device.cuda.Eve [en] paddle.device.cuda.Event=paddle.fluid.libpaddle.CUDAEvent paddle.device.cuda.Stream=paddle.fluid.libpaddle.CUDAStream -paddle.no_grad=paddle.fluid.dygraph.base.no_grad_ \ No newline at end of file +paddle.no_grad=paddle.fluid.dygraph.base.no_grad_ diff --git a/docs/api/paddle/hypot_cn.rst b/docs/api/paddle/hypot_cn.rst index 5cbb1ad5a87..d6d29711d48 100644 --- a/docs/api/paddle/hypot_cn.rst +++ b/docs/api/paddle/hypot_cn.rst @@ -9,11 +9,11 @@ hypot `hypot` 函数对于给定直角三角形直角边 `x`, `y` 实现斜边长度求解的计算; .. math:: - out= \sqrt{x^2 + y^2} + out= \sqrt{x^2 + y^2} 参数 :::::::::: - - **x** (Tensor) – 输入Tensor,它的数据类型可以是 float32,float64, int32, int64。 + - **x** (Tensor) – 输入 Tensor,它的数据类型可以是 float32,float64, int32, int64。 - **y** (Tensor) – 输入 Tensor,它的数据类型可以是 float32,float64,int32, int64。 - **name** (str,可选) - 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为 None。 返回 diff --git a/docs/api/paddle/sparse/isnan_cn.rst b/docs/api/paddle/sparse/isnan_cn.rst index 2d7a8de7534..8d793c64c6f 100644 --- a/docs/api/paddle/sparse/isnan_cn.rst +++ b/docs/api/paddle/sparse/isnan_cn.rst @@ -20,4 +20,4 @@ isnan 代码示例 ::::::::: -COPY-FROM: paddle.sparse.isnan \ No newline at end of file +COPY-FROM: paddle.sparse.isnan diff --git a/docs/api_guides/low_level/layers/tensor.rst b/docs/api_guides/low_level/layers/tensor.rst index c4d7709bd82..b8f209ae635 100644 --- a/docs/api_guides/low_level/layers/tensor.rst +++ b/docs/api_guides/low_level/layers/tensor.rst @@ -9,7 +9,7 @@ Fluid 中使用两种数据结构来承载数据,分别是 `Tensor 和 LoD_Ten 下面介绍这两种数据的相关操作。 Tensor -======= +====== 1. create_tensor --------------------- diff --git a/docs/api_guides/low_level/layers/tensor_en.rst b/docs/api_guides/low_level/layers/tensor_en.rst index d188b62d131..9f62acb452e 100755 --- a/docs/api_guides/low_level/layers/tensor_en.rst +++ b/docs/api_guides/low_level/layers/tensor_en.rst @@ -9,7 +9,7 @@ There are two data structures used in Fluid to host the data, namely `Tensor and Now let's take a closer look at the operations related to these two types of data. Tensor -======= +====== 1. 
create_tensor --------------------- diff --git a/docs/design/concepts/images/parallel_executor_overview.dot b/docs/design/concepts/images/parallel_executor_overview.dot index 40753cb1405..f013bb45602 100644 --- a/docs/design/concepts/images/parallel_executor_overview.dot +++ b/docs/design/concepts/images/parallel_executor_overview.dot @@ -22,7 +22,7 @@ digraph G { scale_loss_0 [label="scale_loss_gradient\nGPU0", shape=box] loss_g0 [label="loss_grad\nGPU0"] scale_loss_0->loss_g0 - + fc_g_0 [label="w_grad\nGPU0", shape=box] loss0 -> fc_g_0 loss_g0 -> fc_g_0 @@ -40,7 +40,7 @@ digraph G { scale_loss_1 [label="scale_loss_gradient\nGPU1", shape=box] loss_g1 [label="loss_grad\nGPU1"] scale_loss_1->loss_g1 - + fc_g_1 [label="w_grad\nGPU1", shape=box] loss1 -> fc_g_1 loss_g1 -> fc_g_1 diff --git a/docs/design/dynamic_rnn/rnn.dot b/docs/design/dynamic_rnn/rnn.dot index c1141cd9c98..92440d347c7 100644 --- a/docs/design/dynamic_rnn/rnn.dot +++ b/docs/design/dynamic_rnn/rnn.dot @@ -1,5 +1,5 @@ digraph G { - label = "simple RNN implementation" + label = "simple RNN implementation" ranksep=2; diff --git a/docs/design/mkldnn/acquire_api/scripts/acquire.dot b/docs/design/mkldnn/acquire_api/scripts/acquire.dot index 9bbcf939cf8..65e508fead7 100644 --- a/docs/design/mkldnn/acquire_api/scripts/acquire.dot +++ b/docs/design/mkldnn/acquire_api/scripts/acquire.dot @@ -1,4 +1,4 @@ -digraph { +digraph { rankdir=LR weight=0.5 concentrate=true @@ -7,7 +7,7 @@ digraph { nodesep=1 node[width=4.4,shape=box] - + Node0x490c380 [shape=record,label="SoftmaxMKLDNNKernel::Compute()\l"]; Node0x4ab38f0 [shape=record,label="MKLDNNActivationKernel::Compute()\l"]; @@ -48,6 +48,6 @@ Node0x4b2e4f0 -> Node0x496cfc0 Node0x4ab38f0 -> Node0x49164c0 Node0x4ab38f0 -> Dst Node0x4ab38f0 -> Node0x491bca0 -Node0x4ab38f0 -> Node0x4b2e4f0[style="bold"] +Node0x4ab38f0 -> Node0x4b2e4f0[style="bold"] } diff --git a/docs/design/mkldnn/caching/scripts/cache.dot b/docs/design/mkldnn/caching/scripts/cache.dot index a921c77233e..ed235c9be0f 100644 --- a/docs/design/mkldnn/caching/scripts/cache.dot +++ b/docs/design/mkldnn/caching/scripts/cache.dot @@ -34,6 +34,6 @@ digraph Q { } -// For DefaultSessionID Key is having TID inside, for anything else eg. clearing mode , named session ID. no TID in key. ParallelExecutor is workign in default mode +// For DefaultSessionID Key is having TID inside, for anything else eg. clearing mode , named session ID. no TID in key. 
ParallelExecutor is workign in default mode // // diff --git a/docs/design/mkldnn/caching/scripts/get_blob.dot b/docs/design/mkldnn/caching/scripts/get_blob.dot index a5c66a9d02b..5c31491dca8 100644 --- a/docs/design/mkldnn/caching/scripts/get_blob.dot +++ b/docs/design/mkldnn/caching/scripts/get_blob.dot @@ -30,6 +30,6 @@ A4 -> A5 [label="yes"] A5 -> B1 B1 -> B3 [label="yes"] B1 -> B [label="no"] -B3 -> B4 +B3 -> B4 } diff --git a/docs/design/mkldnn/caching/scripts/set_blob.dot b/docs/design/mkldnn/caching/scripts/set_blob.dot index a32ea9b3017..9c9b6904700 100644 --- a/docs/design/mkldnn/caching/scripts/set_blob.dot +++ b/docs/design/mkldnn/caching/scripts/set_blob.dot @@ -46,8 +46,8 @@ F -> A A5 -> B1 B1 -> B2 [label="no"] -B2 -> B3 +B2 -> B3 B1 -> B3 [label="yes"] -B3 -> B4 +B3 -> B4 } diff --git a/docs/design/mkldnn/data_transformation/scripts/data_transform.dot b/docs/design/mkldnn/data_transformation/scripts/data_transform.dot index 5e0a03bf7c7..da9e3904607 100644 --- a/docs/design/mkldnn/data_transformation/scripts/data_transform.dot +++ b/docs/design/mkldnn/data_transformation/scripts/data_transform.dot @@ -4,7 +4,7 @@ digraph Q { rankdir=LR node[shape=box] - TransData[label="TransformData"] + TransData[label="TransformData"] toMKLDNN[label="Label Tensor as MKL-DNN"] fromMKLDNN[label="TransDataLayoutFromMKLDNN"] innerMKLDNN[label= {MKL-DNN Reorder}>] @@ -28,19 +28,18 @@ digraph Q { } - cpuop -> TransData[label="Tensor ", style=bold] + cpuop -> TransData[label="Tensor ", style=bold] mkldnnop -> TransData[label="MKL-DNN Tensor ", color="black:invis:black"] mkldnnop -> innerMKLDNN[label="MKL-DNN Tensor"] - toMKLDNN -> mkldnnop2[style=bold, label="MKL-DNN Tensor "] + toMKLDNN -> mkldnnop2[style=bold, label="MKL-DNN Tensor "] innerMKLDNN -> cpuop2[label="Tensor ", color="black:invis:black"] - innerMKLDNN -> fetchop[label="Tensor "] + innerMKLDNN -> fetchop[label="Tensor "] subgraph cluster_Trans { label="Transformation of Tensors" style=dotted TransData -> toMKLDNN[style=bold] TransData -> fromMKLDNN[color="black:invis:black"] fromMKLDNN -> innerMKLDNN[color="black:invis:black"] - + } } - diff --git a/docs/design/mkldnn/gru/images/different_tensor_memory_arrangement.svg b/docs/design/mkldnn/gru/images/different_tensor_memory_arrangement.svg index 552f24881fb..b4eb26318ce 100644 --- a/docs/design/mkldnn/gru/images/different_tensor_memory_arrangement.svg +++ b/docs/design/mkldnn/gru/images/different_tensor_memory_arrangement.svg @@ -1,3 +1,3 @@ -
[exported SVG body omitted: diagram contrasting the "PP WeightH:" layout with the "oneDNN:" layout — reset-, update- and output-gate weight blocks (i1Ro1 … i2Oo2) combined into the "Real memory arrangement:"]
\ No newline at end of file
+[identical SVG body, re-saved with a trailing newline]
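Since the figure above survives only as a placeholder, here is a minimal self-contained NumPy sketch of the idea its labels describe: gate-blocked hidden weights stored as two separate pieces being copied into one contiguous three-gate buffer. The 2×2 shapes, variable names, and the exact gate order are illustrative assumptions, not the integration's actual code.

```python
import numpy as np

# Illustrative only: 2 input units, 2 output units per gate, three gates
# (reset R, update U, output O), mirroring the labels in the omitted figure.
ins, outs = 2, 2
blocks = {g: np.random.rand(ins, outs) for g in ("R", "U", "O")}

# A plausible "PP WeightH"-style storage: the R and U blocks fused side by
# side, with the O block kept separate (an assumption drawn from the "+"
# joining the two parts in the figure).
pp_ru = np.concatenate([blocks["R"], blocks["U"]], axis=1)  # [ins, 2*outs]
pp_o = blocks["O"]                                          # [ins, outs]

# oneDNN-style storage: all three gate blocks contiguous in one buffer,
# i.e. the figure's "Real memory arrangement".
onednn = np.concatenate([pp_ru, pp_o], axis=1)              # [ins, 3*outs]
assert onednn.shape == (ins, 3 * outs)
```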
diff --git a/docs/design/mkldnn/gru/images/eq_origin_mode_false.svg b/docs/design/mkldnn/gru/images/eq_origin_mode_false.svg index 7fcc392533e..2474ce3a622 100644 --- a/docs/design/mkldnn/gru/images/eq_origin_mode_false.svg +++ b/docs/design/mkldnn/gru/images/eq_origin_mode_false.svg @@ -133,4 +133,4 @@ - \ No newline at end of file + diff --git a/docs/design/mkldnn/gru/images/eq_origin_mode_true.svg b/docs/design/mkldnn/gru/images/eq_origin_mode_true.svg index 9f699fc721d..66c7792d420 100644 --- a/docs/design/mkldnn/gru/images/eq_origin_mode_true.svg +++ b/docs/design/mkldnn/gru/images/eq_origin_mode_true.svg @@ -133,4 +133,4 @@ - \ No newline at end of file + diff --git a/docs/design/mkldnn/gru/images/eq_proof.svg b/docs/design/mkldnn/gru/images/eq_proof.svg index 747e40847ab..6ee6ebc0ac4 100644 --- a/docs/design/mkldnn/gru/images/eq_proof.svg +++ b/docs/design/mkldnn/gru/images/eq_proof.svg @@ -233,4 +233,4 @@ - \ No newline at end of file + diff --git a/docs/design/mkldnn/gru/images/fc_gru_fuse_pass.svg b/docs/design/mkldnn/gru/images/fc_gru_fuse_pass.svg index 27e34ce1008..7489bc837d5 100644 --- a/docs/design/mkldnn/gru/images/fc_gru_fuse_pass.svg +++ b/docs/design/mkldnn/gru/images/fc_gru_fuse_pass.svg @@ -1,3 +1,3 @@ -
[exported SVG body omitted: fuse-pass diagram — an FC op (WeightsX, FC Bias) feeding a GRU op (WeightsH, GRU Bias) is replaced by a single Fusion GRU op carrying WeightsX, WeightsH and a combined "(FC + GRU) Bias"]
\ No newline at end of file
+[identical SVG body, re-saved with a trailing newline]
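As a text stand-in for the omitted fuse-pass figure, the hedged sketch below shows why a single "(FC + GRU) Bias" box can replace the two original bias inputs: if both biases are added to the same gate pre-activation (the usual situation for an FC feeding a GRU), they reduce to one element-wise sum. Shapes are invented for illustration; the real pass rewrites the graph, not arrays.

```python
import numpy as np

hidden = 4  # illustrative size only
fc_bias = np.random.rand(3 * hidden)   # bias of the FC producing gate inputs
gru_bias = np.random.rand(3 * hidden)  # bias of the GRU gates themselves

# (x @ W + fc_bias) + gru_bias == x @ W + (fc_bias + gru_bias),
# so the fused op can carry one combined bias term.
fused_bias = fc_bias + gru_bias
```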
diff --git a/docs/design/mkldnn/gru/images/input_difference.svg b/docs/design/mkldnn/gru/images/input_difference.svg index e630fbf3917..99e250acd6f 100644 --- a/docs/design/mkldnn/gru/images/input_difference.svg +++ b/docs/design/mkldnn/gru/images/input_difference.svg @@ -1,3 +1,3 @@ -
[exported SVG body omitted: diagram contrasting the flat "PP Tensor:" data W1 … W12 with "LoD info:" offsets 0, 3, 7, 9, 12 against the padded "oneDNN:" batch laid out along T (time) and N (batch) axes]
\ No newline at end of file
+[identical SVG body, re-saved with a trailing newline]
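Because the figure is reduced to a placeholder, here is a small self-contained NumPy sketch of the conversion it depicts: the flat LoD-batched data W1 … W12 (offsets 0, 3, 7, 9, 12, taken directly from the figure's labels) zero-padded into a dense batch with explicit T (time) and N (batch) axes. Variable names and the omitted channel axis are illustrative assumptions.

```python
import numpy as np

data = np.arange(1, 13)   # stands in for the word vectors W1 … W12
lod = [0, 3, 7, 9, 12]    # sequence boundaries: lengths 3, 4, 2, 3

N = len(lod) - 1                                 # batch size: 4 sequences
T = max(lod[i + 1] - lod[i] for i in range(N))   # longest sequence: 4 steps

# Left-aligned zero padding into an [N, T] batch; transposing gives a
# TNC-style [T, N] view (the channel axis is omitted for brevity).
padded = np.zeros((N, T), dtype=data.dtype)
for n in range(N):
    seq = data[lod[n]:lod[n + 1]]
    padded[n, : len(seq)] = seq
tnc = padded.T
```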
diff --git a/docs/design/mkldnn/gru/images/input_is_reverse.svg b/docs/design/mkldnn/gru/images/input_is_reverse.svg index dbeaf80a517..6e21d2c5047 100644 --- a/docs/design/mkldnn/gru/images/input_is_reverse.svg +++ b/docs/design/mkldnn/gru/images/input_is_reverse.svg @@ -1,3 +1,3 @@ -
[exported SVG body omitted: diagram showing the same W1 … W12 LoD data (offsets 0, 3, 7, 9, 12) zero-padded into NTC/TNC batches along the T and N axes, with the padding placed differently for is_reverse == false and is_reverse == true]
\ No newline at end of file
+[identical SVG body, re-saved with a trailing newline]
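The omitted is_reverse figure contrasts where that zero padding lands. One hedged reading of its labels: for `is_reverse == false` each sequence is left-aligned (zeros trail it), while for `is_reverse == true` it is right-aligned so all sequences end on the last time step. The self-contained sketch below encodes that reading; treat the alignment rule as an interpretation of the figure, not a statement about the kernel's actual code.

```python
import numpy as np

data = np.arange(1, 13)   # W1 … W12, as in the figure
lod = [0, 3, 7, 9, 12]
N, T = len(lod) - 1, 4

def pad(is_reverse: bool) -> np.ndarray:
    out = np.zeros((N, T), dtype=data.dtype)
    for n in range(N):
        seq = data[lod[n]:lod[n + 1]]
        if is_reverse:
            out[n, T - len(seq):] = seq   # right-aligned: zeros in front
        else:
            out[n, : len(seq)] = seq      # left-aligned: zeros at the end
    return out

forward, backward = pad(False), pad(True)
```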
diff --git a/docs/design/mkldnn/inplace/scripts/inplace.dot b/docs/design/mkldnn/inplace/scripts/inplace.dot index 7a5d22bb292..99d8818db85 100644 --- a/docs/design/mkldnn/inplace/scripts/inplace.dot +++ b/docs/design/mkldnn/inplace/scripts/inplace.dot @@ -3,21 +3,21 @@ digraph G { e1[label="relu"] e2[label="elementwise_add"] e3[label="elementwise_mul"] - + a -> e1 e1 -> b b[label="b"] e[label="b"] - + subgraph cluster_0 { label="in-placed" b -> e2 d -> e2 e2 -> e } - - + + e -> e3 f -> e3 -> g - + } diff --git a/docs/design/mkldnn/inplace/scripts/multi-output-inplace.dot b/docs/design/mkldnn/inplace/scripts/multi-output-inplace.dot index 9778f5f1872..0c2fdce1f0c 100644 --- a/docs/design/mkldnn/inplace/scripts/multi-output-inplace.dot +++ b/docs/design/mkldnn/inplace/scripts/multi-output-inplace.dot @@ -5,10 +5,10 @@ subgraph cluster_before { op1[label="elementwise_add"] op2[label="top_k\ninputs_vars{c}"] op3[label="top_k\ninputs_vars{c}"] - - + + c[label="c"] - + subgraph cluster_0 { style=solid label="to be in-placed" @@ -16,11 +16,11 @@ subgraph cluster_before { b-> op1 op1 -> c } - - + + c -> op2 - c -> op3 - + c -> op3 + op2 -> d op2 -> e op3 -> g @@ -32,8 +32,8 @@ subgraph cluster_after { op1b[label="elementwise_add"] op2b[label="top_k\ninput_vars{a}"] op3b[label="top_k\ninput_vars{a}"] - - + + cb[label="a"] ab[label="a"] bb[label="b"] @@ -41,7 +41,7 @@ subgraph cluster_after { eb[label="e"] gb[label="g"] hb[label="h"] - + subgraph cluster_0b { style=solid label="applied in-placed" @@ -49,11 +49,11 @@ subgraph cluster_after { bb-> op1b op1b -> cb } - - + + cb -> op2b - cb -> op3b - + cb -> op3b + op2b -> db op2b -> eb op3b -> gb diff --git a/docs/design/mkldnn/inplace/scripts/unwanted-inplace.dot b/docs/design/mkldnn/inplace/scripts/unwanted-inplace.dot index a772736c31f..42323f7d632 100644 --- a/docs/design/mkldnn/inplace/scripts/unwanted-inplace.dot +++ b/docs/design/mkldnn/inplace/scripts/unwanted-inplace.dot @@ -2,16 +2,16 @@ digraph G { e1[label="softmax\n"] e2[label="layer_norm\n"] - + c[label="b"] e[label="a"] - subgraph cluster_0 { + subgraph cluster_0 { label="in-placed" a -> e1 e1 -> c } - + c -> e2 e2 -> e - + } diff --git a/docs/design/mkldnn/nhwc/scripts/nhwc-grad.dot b/docs/design/mkldnn/nhwc/scripts/nhwc-grad.dot index cceb8f6ccd3..374b795bc0d 100644 --- a/docs/design/mkldnn/nhwc/scripts/nhwc-grad.dot +++ b/docs/design/mkldnn/nhwc/scripts/nhwc-grad.dot @@ -22,7 +22,7 @@ tensor_mean_grad[shape=rectangle,label="Tensor mean grad op\nlayout=kNHWC\nNHWC tensor_feed[shape=rectangle,label="Tensor feed_op\nlayout=kNCHW\nNHWC dim format"] tensor_feed2[shape=rectangle,label="Tensor feed_op\nlayout=kMKLDNN\nNCHW dim format"] -feed_op -> tensor_feed -> tensor_feed2 -> conv_mkldnn -> tensor_mkldnn -> pool_mkldnn -> tensor_mkldnn2 -> tensor_input_mean -> mean_op +feed_op -> tensor_feed -> tensor_feed2 -> conv_mkldnn -> tensor_mkldnn -> pool_mkldnn -> tensor_mkldnn2 -> tensor_input_mean -> mean_op mean_op -> tensor_output_mean @@ -30,7 +30,7 @@ tensor_input_mean -> tensor_mean_grad[xlabel="InferShape", style=dashed] tensor_output_mean -> mean_grad_op -mean_grad_op -> tensor_mean_grad -> tensor_mean_grad_mkldnn -> pool_grad_mkldnn -> tensor_pool_grad_mkldnn -> conv_grad_mkldnn -> tensor_conv_grad_mkldnn -> fetch_op +mean_grad_op -> tensor_mean_grad -> tensor_mean_grad_mkldnn -> pool_grad_mkldnn -> tensor_pool_grad_mkldnn -> conv_grad_mkldnn -> tensor_conv_grad_mkldnn -> fetch_op {rank="same" mean_op; mean_grad_op; tensor_output_mean} {rank="same" pool_mkldnn; pool_grad_mkldnn} diff 
--git a/docs/design/mkldnn/nhwc/scripts/nhwc.dot b/docs/design/mkldnn/nhwc/scripts/nhwc.dot index dee0df615a1..a4d6093bded 100644 --- a/docs/design/mkldnn/nhwc/scripts/nhwc.dot +++ b/docs/design/mkldnn/nhwc/scripts/nhwc.dot @@ -12,6 +12,6 @@ input_feed[shape=rectangle,label=layout=kNCHW
NH input_feed2[shape=rectangle,label="Tensor Input signal\nlayout=kMKLDNN\nNCHW dim format"] filter_feed[shape=rectangle,label="Tensor filter data\nlayout=kNCHW\nNCHW dim format"] filter_feed2[shape=rectangle,label="Tensor filter data\nlayout=kMKLDNN\nNCHW dim format"] -feed_op -> input_feed -> input_feed2 -> conv_mkldnn -> tensor_mkldnn -> pool_mkldnn -> tensor_mkldnn2 -> fetch_op -> tensor_fetch +feed_op -> input_feed -> input_feed2 -> conv_mkldnn -> tensor_mkldnn -> pool_mkldnn -> tensor_mkldnn2 -> fetch_op -> tensor_fetch feed_op2 -> filter_feed -> filter_feed2 -> conv_mkldnn } diff --git "a/docs/eval/\343\200\220Hackathon No.112\343\200\221 PR.md" "b/docs/eval/\343\200\220Hackathon No.112\343\200\221 PR.md" index bf4031931a2..53f6978d097 100644 --- "a/docs/eval/\343\200\220Hackathon No.112\343\200\221 PR.md" +++ "b/docs/eval/\343\200\220Hackathon No.112\343\200\221 PR.md" @@ -14,114 +14,114 @@ | ------------ | ------------------------------------------------- | ------------------------------ | -------------------- | | 类型 | 选项(选择一项) | 您的选择 | 使用时遇到的问题或建议 | | 任务模式 | 单机多卡 or 多机多卡 | 单机多卡 | - | -| 运行方式 | 物理机 or docker or AI开发平台 | docker | - | +| 运行方式 | 物理机 or docker or AI 开发平台 | docker | - | | 任务类型 | 集合通信(GPU)or 参数服务器(CPU/GPU) | 集合通信(GPU) | - | -| 分布式训练使用程度 | 成功实践过 or 熟悉基本过程 or 了解基本过程 or 不熟悉 | 成功实践过 | 在使用曙光平台docker创建镜像后难以使用多节点进行分布式训练,建议推出一种可以直接在E-Shell安装的方式,可以使得任务通过队列的方式提交 | +| 分布式训练使用程度 | 成功实践过 or 熟悉基本过程 or 了解基本过程 or 不熟悉 | 成功实践过 | 在使用曙光平台 docker 创建镜像后难以使用多节点进行分布式训练,建议推出一种可以直接在 E-Shell 安装的方式,可以使得任务通过队列的方式提交 | | 分布式策略使用程度 | 使用过调优策略 or 了解部分优化策略 or 没有使用过优化策略 | 了解部分优化策略 | 无 | | 分布式数据处理使用程度 | 熟悉 or 了解 or 不清楚 | 熟悉 | 无 | | 分布式模型保存和加载使用程度 | 熟悉 or 了解 or 不清楚 | 熟悉 | 无 | -| 遇到问题如何解决(可多选) | 日志排查 or 社区反馈 or 提Issue or 其他 | 日志排查、社区反馈 | 建议在官方文档中对常见问题进行汇总,便于使用者解决问题 | +| 遇到问题如何解决(可多选) | 日志排查 or 社区反馈 or 提 Issue or 其他 | 日志排查、社区反馈 | 建议在官方文档中对常见问题进行汇总,便于使用者解决问题 | # 一、摘要 -本评估方案将从以下几个方面对paddle分布式框架进行评估: -- 1、环境配置,对比pytorch环境以及paddle环境在曙光超算上的配置 - 对曙光超算如何使用paddle进行分布式计算进行了介绍 +本评估方案将从以下几个方面对 paddle 分布式框架进行评估: +- 1、环境配置,对比 pytorch 环境以及 paddle 环境在曙光超算上的配置 + 对曙光超算如何使用 paddle 进行分布式计算进行了介绍 -- 2、Fleet API的使用,对比Pytorch API与Fleet API的区别 +- 2、Fleet API 的使用,对比 Pytorch API 与 Fleet API 的区别 -- 3、分布式动态图的训练,使用pytorch和paddle完成在曙光超算上的分布式训练 - 对文档中代码进行的重写,导入了paddle.vision的部分包 +- 3、分布式动态图的训练,使用 pytorch 和 paddle 完成在曙光超算上的分布式训练 + 对文档中代码进行的重写,导入了 paddle.vision 的部分包 对鲜花数据集加载的代码进行了修改,改为: ```python train_dataset = paddle.vision.datasets.Flowers(mode='train', transform=transform) ``` ```python - optimizer.minimize(avg_loss)改为optimizer.step() -``` - 鲜花数据集的label索引是从1开始的,不是从0开始的,需要手工减1。 + optimizer.minimize(avg_loss)改为 optimizer.step() +``` + 鲜花数据集的 label 索引是从 1 开始的,不是从 0 开始的,需要手工减 1。 -- 4、文档质量评估,对paddle文档质量进行评估 - 文档中的代码有些旧,比如分布式训练快速开始中1.3动态图训练的代码许多API比较旧。 - 鲜花数据集的label索引是从1开始的,不是从0开始的,需要手工减1。在文档质量评估方面,认为文档对错误报告的解决方案不足。 +- 4、文档质量评估,对 paddle 文档质量进行评估 + 文档中的代码有些旧,比如分布式训练快速开始中 1.3 动态图训练的代码许多 API 比较旧。 + 鲜花数据集的 label 索引是从 1 开始的,不是从 0 开始的,需要手工减 1。在文档质量评估方面,认为文档对错误报告的解决方案不足。 - 5、错误汇总 - - + + # 二、环境配置 -## (1)在曙光超算昆山计算服务器部署pytorch分布式环境,给出部署步骤 -- 首先安装anaconda +## (1)在曙光超算昆山计算服务器部署 pytorch 分布式环境,给出部署步骤 +- 首先安装 anaconda ```python bash Anaconda3-2020.07-Linux-x86_64.sh –u -``` -- 创建并进入python3.6环境 +``` +- 创建并进入 python3.6 环境 ```python conda create -n pytorch-1.9 python=3.6 conda activate pytorch-1.9 -``` -- 安装pytorch-1.9(适配rocm-4.0.1及以上)PyTorch1.8和PyTorch1.9安装wheel包在公共目录: +``` +- 安装 pytorch-1.9(适配 rocm-4.0.1 及以上)PyTorch1.8 和 PyTorch1.9 安装 wheel 包在公共目录: ```python /public/software/apps/DeepLearning/whl/rocm-4.0.1/ -``` -- 安装pytorch_1.9-rocm_4.0.1(使用清华源) +``` +- 安装 
pytorch_1.9-rocm_4.0.1(使用清华源) ```python pip install /public/software/apps/DeepLearning/whl/rocm-4.0.1/torch-1.9.0+rocm4.0.1-cp36-cp36m-linux_x86_64.whl -i https://pypi.tuna.tsinghua.edu.cn/simple/ -``` -- 对于torchverion的安装不能按照曙光官方给定的方法来,否则在torchversion在运行自定义算子时会出现错误,所以需要进行源码安装,安装方法如下: -```python -1、本地下载对应的torchvision分支源码包:https://github.com/pytorch/vision上传集群, -2、进入对应的conda环境,加载对应的rocm(这里rocm4.0.1)版本; -3、conda install libpng -y -4、conda install jpeg -y -5、pip3 install numpy pillow matplotlib ninja -i https://pypi.tuna.tsinghua.edu.cn/simple/ -6、使用salloc申请计算结点,使用ssh登录至计算节点,并进入对应的conda环境加载rocm(这里rocm4.0.1),执行编译:CC=clang CXX=clang++ python setup.py install -``` -## (2)在曙光超算昆山计算服务器部署paddle分布式环境,给出部署步骤 -若在曙光平台不能使用sudo指令,从而导致不能直接使用docker pull方式安装镜像,可以直接使用曙光内置的可视化容器方式安装: +``` +- 对于 torchverion 的安装不能按照曙光官方给定的方法来,否则在 torchversion 在运行自定义算子时会出现错误,所以需要进行源码安装,安装方法如下: +```python +1、本地下载对应的 torchvision 分支源码包:https://github.com/pytorch/vision 上传集群, +2、进入对应的 conda 环境,加载对应的 rocm(这里 rocm4.0.1)版本; +3、conda install libpng -y +4、conda install jpeg -y +5、pip3 install numpy pillow matplotlib ninja -i https://pypi.tuna.tsinghua.edu.cn/simple/ +6、使用 salloc 申请计算结点,使用 ssh 登录至计算节点,并进入对应的 conda 环境加载 rocm(这里 rocm4.0.1),执行编译:CC=clang CXX=clang++ python setup.py install +``` +## (2)在曙光超算昆山计算服务器部署 paddle 分布式环境,给出部署步骤 +若在曙光平台不能使用 sudo 指令,从而导致不能直接使用 docker pull 方式安装镜像,可以直接使用曙光内置的可视化容器方式安装: - 1、点击我的服务,计算智能服务 - 2、点击容器服务 - 3、点击容器管理添加镜像 - 4、镜像添加,源镜像名称填:paddlepaddle/paddle,源镜像标签填:latest-dev-rocm4.0-miopen2.11 - 5、添加完成后添加完成即可添加 - 6、使用镜像,对镜像进行推送,快速访问选择,是 -- 7、点击AI 服务,点击之前创建的容器 -- 8、点击启动容器,即可启动rocm-4.0.1环境 -- 9、打开容器后,该计算环境不能连接互联网,该环境可以调用此前终端设置的一切文件以及环境(包括conda环境),故需要提取打开命令行(E-Shell)创建一个conda环境安装paddle。 -- 10、曙光服务器conda环境安装方式参考曙光超算官方链接: +- 7、点击 AI 服务,点击之前创建的容器 +- 8、点击启动容器,即可启动 rocm-4.0.1 环境 +- 9、打开容器后,该计算环境不能连接互联网,该环境可以调用此前终端设置的一切文件以及环境(包括 conda 环境),故需要提取打开命令行(E-Shell)创建一个 conda 环境安装 paddle。 +- 10、曙光服务器 conda 环境安装方式参考曙光超算官方链接: ```python https://www.hpccube.com/doc/1.0.6/11250/general-handbook/compile/Anaconda.html -``` -- 11、激活conda环境并新建一个conda环境,进入该环境 +``` +- 11、激活 conda 环境并新建一个 conda 环境,进入该环境 ```python source activate conda create -n paddle python=3.7 conda activate paddle -``` -- 12、在曙光上使用paddle官网给出的安装方式会出现错误。 +``` +- 12、在曙光上使用 paddle 官网给出的安装方式会出现错误。 ```python python -m pip install paddlepaddle-rocm==2.2.2.rocm401.miopen211 -f https://www.paddlepaddle.org.cn/whl/rocm/stable.whl(此方法在曙光无法安装) -``` -- 13、故需要提前下载whl文件,下载链接: +``` +- 13、故需要提前下载 whl 文件,下载链接: ```python https://www.paddlepaddle.org.cn/whl/rocm/stable.whl -``` +``` - 14、paddlepaddle_rocm-2.2.2-cp37-cp37m-linux_x86_64.whl,版本经过测试可以安装。安装指令: ```python pip install paddlepaddle_rocm-2.2.2-cp37-cp37m-linux_x86_64.whl -i https://pypi.tuna.tsinghua.edu.cn/simple/ -``` -- 15、在安装完上述操作后还需要手动安装两个库opencv-python以及scipy +``` +- 15、在安装完上述操作后还需要手动安装两个库 opencv-python 以及 scipy ```python pip install scipy -i https://pypi.tuna.tsinghua.edu.cn/simple/ pip install opencv-python -i https://pypi.tuna.tsinghua.edu.cn/simple/ -``` +``` ## (3)对比两者的易用性与区别 -Pytorch的分布式环境在曙光平台安装时需要手动编译torchversion,这一点上pytorch比较繁琐。但是pytorch的环境在曙光平台比较稳定,而paddle环境在曙光平台经常不稳定,有时候能运行,有时候不能运行。 +Pytorch 的分布式环境在曙光平台安装时需要手动编译 torchversion,这一点上 pytorch 比较繁琐。但是 pytorch 的环境在曙光平台比较稳定,而 paddle 环境在曙光平台经常不稳定,有时候能运行,有时候不能运行。 ![image](https://user-images.githubusercontent.com/102226413/164142960-e956efce-a8fe-40ea-bfba-a83b8f8203c5.png) -上述问题是rocm版本问题,需要使用rocm-4.0.1版本。 修改rocm版本的方法为. module switch compiler/rocm/4.0.1 +上述问题是 rocm 版本问题,需要使用 rocm-4.0.1 版本。 修改 rocm 版本的方法为. 
module switch compiler/rocm/4.0.1 另外有一些问题没办法解决,我们使用的办法是重新开启镜像(多次开启后就会有可以使用的时候)无法解决的问题截图如下: ![image](https://user-images.githubusercontent.com/102226413/164143125-70d0e4ff-46d7-4461-8cb0-72c14e98b8e0.png) @@ -129,53 +129,53 @@ Pytorch的分布式环境在曙光平台安装时需要手动编译torchversion ![image](https://user-images.githubusercontent.com/102226413/164143166-cde2793b-eb06-43a3-92d1-bfa68c2f1558.png) -另外,我们在曙光上使用paddle的方法为开启镜像的方式,但是曙光平台对docker镜像的支持不太好,每次镜像保持的时间最多为72小时,而且每次关闭镜像后,无法重新开启原先镜像。为了方便使用,希望能够支持 任务提交方式运行的paddle分布式框架。而且任务提交的方式还方便管理多节点运行。 +另外,我们在曙光上使用 paddle 的方法为开启镜像的方式,但是曙光平台对 docker 镜像的支持不太好,每次镜像保持的时间最多为 72 小时,而且每次关闭镜像后,无法重新开启原先镜像。为了方便使用,希望能够支持 任务提交方式运行的 paddle 分布式框架。而且任务提交的方式还方便管理多节点运行。 -# 三、Fleet API的使用 -## (1)分析pytorch分布式框架DDP某些API的使用 +# 三、Fleet API 的使用 +## (1)分析 pytorch 分布式框架 DDP 某些 API 的使用 - 导入必要的分布式训练依赖包 ```python import torch.distributed as dist -``` -- 初始化DDP分布式环境,需要指定backend, init_method,world_size, rank四个参数 +``` +- 初始化 DDP 分布式环境,需要指定 backend, init_method,world_size, rank 四个参数 ```python dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url, world_size=args.world_size, rank=args.rank) -``` +``` - 优化器,不需要进行分布式函数的包装 -- 通过DistributedDataParallel获取分布式model,用于支持分布式训练 +- 通过 DistributedDataParallel 获取分布式 model,用于支持分布式训练 ```python model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank]) -``` +``` -## (2)按照文档内容使用Fleet API +## (2)按照文档内容使用 Fleet API 文档链接如下:https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/06_distributed_training/cluster_quick_start_cn.html -本次实验测试的Fleet API如下: +本次实验测试的 Fleet API 如下: - 导入必要的分布式训练依赖包 ```python from paddle.distributed import fleet -``` -- 初始化Fleet环境 +``` +- 初始化 Fleet 环境 ```python fleet.init(is_collective=True) -``` +``` - 分布式优化器 ```python optimizer = fleet.distributed_optimizer(optimizer) -``` -- 通过Fleet API获取分布式model,用于支持分布式训练 +``` +- 通过 Fleet API 获取分布式 model,用于支持分布式训练 ```python resnet = fleet.distributed_model(resnet) -``` +``` ## (3)比较 -pytorch和paddle的分布式代码基本相似,pytorch初始化DDP分布式环境,需要指定backend, init_method,world_size, rank四个参数,比较麻烦。 +pytorch 和 paddle 的分布式代码基本相似,pytorch 初始化 DDP 分布式环境,需要指定 backend, init_method,world_size, rank 四个参数,比较麻烦。 -# 四、分布式动态图训练 +# 四、分布式动态图训练 -## (1)使用pytorch完成一个图像分类的动态图分布式例子 +## (1)使用 pytorch 完成一个图像分类的动态图分布式例子 -DDP分布式代码,测试flower数据集: +DDP 分布式代码,测试 flower 数据集: ```python import torch.nn as nn import torch.utils.data as D @@ -248,7 +248,7 @@ def get_world_size(): def reduce_value(value, average=True): world_size = get_world_size() - if world_size < 2: # 单GPU的情况 + if world_size < 2: # 单 GPU 的情况 return value with torch.no_grad(): @@ -257,7 +257,7 @@ def reduce_value(value, average=True): value /= world_size return value - + def main(): # 初始化 rank = int(os.environ["RANK"]) @@ -270,11 +270,11 @@ def main(): # 数据增强 data_transforms = { 'train': transforms.Compose([ - transforms.RandomRotation(45), # 随机旋转,-45到45度之间随机 + transforms.RandomRotation(45), # 随机旋转,-45 到 45 度之间随机 transforms.CenterCrop(224), # 从中心开始裁剪 transforms.RandomHorizontalFlip(p=0.5), # 随机水平翻转 选择一个概率 transforms.RandomVerticalFlip(p=0.5), # 随机垂直翻转 - transforms.ColorJitter(brightness=0.2, contrast=0.1, saturation=0.1, hue=0.1), # 参数1为亮度,参数2为对比度,参数3为饱和度,参数4为色相 + transforms.ColorJitter(brightness=0.2, contrast=0.1, saturation=0.1, hue=0.1), # 参数 1 为亮度,参数 2 为对比度,参数 3 为饱和度,参数 4 为色相 transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) # 均值,标准差 @@ -301,7 +301,7 @@ def main(): train_loader = torch.utils.data.DataLoader(train_dataset, sampler=train_sampler, batch_size=batch_size) # 模型构建 model = 
torchvision.models.resnet50(pretrained=False, num_classes=102).to(device) - # 构建DDP模型 + # 构建 DDP 模型 model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[gpu]) criterion = nn.CrossEntropyLoss() optimizer = torch.optim.SGD(filter(lambda p: p.requires_grad, model.parameters()), lr=0.0001, momentum=0.9) @@ -336,7 +336,7 @@ def main(): if int(rank) == 0: print('{} Loss: {:.4f} Acc: {:.4f}'.format( 'train', epoch_loss, epoch_acc)) - + if int(rank) == 0: print('Training complete Best val Acc: {:4f}'.format(best_acc)) @@ -349,25 +349,25 @@ if __name__ == "__main__": python -m torch.distributed.launch --nproc_per_node=4 train.py ``` -## (2)使用paddle完成一个图像分类的分布式例子 +## (2)使用 paddle 完成一个图像分类的分布式例子 由于文档中提供的代码出现了较多问题,我们对代码进行了重新的编写。修改的部分有: -- 1、导入了paddle.vision的部分包, +- 1、导入了 paddle.vision 的部分包, - 2、对鲜花数据集加载的代码进行了修改,改为: - train_dataset = paddle.vision.datasets.Flowers(mode='train', transform=transform) 官网的代码不能再曙光平台下载鲜花数据集,所以我们需要提取下载离线鲜花数据集报错如下: ![image](https://user-images.githubusercontent.com/102226413/164143513-24236f90-975d-47f1-a1db-7e18e6c94c9c.png) -且数据集的保存地址为一个缓存空间,用户在使用的时候可能找不到数据集,如/public/home/username/.cache/paddle/dataset目录。 -而pytorch的加载数据集API会吧数据集加载到当前目录,方便了使用者。 +且数据集的保存地址为一个缓存空间,用户在使用的时候可能找不到数据集,如/public/home/username/.cache/paddle/dataset 目录。 +而 pytorch 的加载数据集 API 会吧数据集加载到当前目录,方便了使用者。 ![image](https://user-images.githubusercontent.com/102226413/164144065-2fea8ac3-dcf5-48ac-a4c7-05cace99c611.png) -- 3、optimizer.minimize(avg_loss)改为optimizer.step() -- 4、鲜花数据集的label索引是从1开始的,不是从0开始的,需要手工减1。 +- 3、optimizer.minimize(avg_loss)改为 optimizer.step() +- 4、鲜花数据集的 label 索引是从 1 开始的,不是从 0 开始的,需要手工减 1。 -train_fleet_dygraph.py重构后的代码: +train_fleet_dygraph.py 重构后的代码: ```python # -*- coding: UTF-8 -*- import numpy as np @@ -413,7 +413,7 @@ def optimizer_setting(parameter_list=None): # 设置训练函数 def train_resnet(): - # 初始化Fleet环境 + # 初始化 Fleet 环境 fleet.init(is_collective=True) # resnet = resnet34(class_dim=class_dim, layers=50) @@ -422,9 +422,9 @@ def train_resnet(): optimizer = optimizer_setting(parameter_list=resnet.parameters()) # 分布式优化器 optimizer = fleet.distributed_optimizer(optimizer) - # 通过Fleet API获取分布式model,用于支持分布式训练 + # 通过 Fleet API 获取分布式 model,用于支持分布式训练 resnet = fleet.distributed_model(resnet) - # 构建分布式数据集 归一化 / 255 并且转成HWC --> CHW格式 + # 构建分布式数据集 归一化 / 255 并且转成 HWC --> CHW 格式 # transform = ToTensor() transform = Compose([ Resize(size=(224, 224)), @@ -476,16 +476,16 @@ def train_resnet(): # 启动训练 if __name__ == '__main__': train_resnet() -``` +``` 运行方式: ```python python3 -m paddle.distributed.launch --gpus=0,1,2,3 train_fleet_dygraph.py -``` +``` ## (3)两个程序的运行结果 -- DDP程序运行结果 +- DDP 程序运行结果 ```python Epoch 7/100 loss: tensor(4.3571, device='cuda:0', grad_fn=) @@ -592,10 +592,10 @@ train Loss: 4.3692 Acc: 0.0447 ``` -- paddlepaddle运行结果,附了第7、8个epoch +- paddlepaddle 运行结果,附了第 7、8 个 epoch ```python launch train in GPU mode! -INFO 2022-04-02 19:30:53,752 launch_utils.py:510] Local start 4 processes. First process distributed environment info (Only For Debug): +INFO 2022-04-02 19:30:53,752 launch_utils.py:510] Local start 4 processes. 
First process distributed environment info (Only For Debug): +=======================================================================================+ | Distributed Envs Value | +---------------------------------------------------------------------------------------+ @@ -647,56 +647,56 @@ Epoch 7, batch 0] loss: 4.30898, acc1: 0.00000, acc5: 0.12500 [Epoch 8, batch 40] loss: 4.38518, acc1: 0.06250, acc5: 0.25000 [Epoch 8, batch 45] loss: 4.08105, acc1: 0.12500, acc5: 0.25000 [Epoch 8, batch 50] loss: 4.33881, acc1: 0.12500, acc5: 0.18750 -``` +``` # 五、文档质量 -感觉文档中部分代码的版本较老,比如1.3动态图完整代码中: +感觉文档中部分代码的版本较老,比如 1.3 动态图完整代码中: ```python -from resnet_dygraph import ResNet -``` -上述操作可以使用paddle内置API进行替换 +from resnet_dygraph import ResNet +``` +上述操作可以使用 paddle 内置 API 进行替换 ```python from paddle.vision.models import ResNet -``` -flower数据集在曙光平台不能通过API直接加载得到,需要手动下载。加载鲜花数据集的API也需要进行更新。可以更新为 +``` +flower 数据集在曙光平台不能通过 API 直接加载得到,需要手动下载。加载鲜花数据集的 API 也需要进行更新。可以更新为 ```python train_dataset = paddle.vision.datasets.Flowers(mode='train', transform=transform) -``` +``` 对于很多错误,在文档中没有一个很好的提示。可以对常见报错进行一个汇总。 # 六、报错查错(问题汇总) -1、paddle在曙光超算上运行出现libamdhip64.4,,需要指定使用rocm4.0.1环境运行,曙光上rocm-2.9环境不能运行。 +1、paddle 在曙光超算上运行出现 libamdhip64.4,,需要指定使用 rocm4.0.1 环境运行,曙光上 rocm-2.9 环境不能运行。 ```python -module rm compiler/rocm/2.9 +module rm compiler/rocm/2.9 module load compiler/rocm/4.0.1 -``` +``` ![image](https://user-images.githubusercontent.com/102226413/164144235-ce808c51-1712-4417-b9bf-99da6362b3f0.png) -2、无法下载flower数据集,需要手动加载数据集 -按照文档旧API无法在曙光平台以及移动九天平台加载数据集,需要手动下载数据集。 -且数据集的保存地址为一个缓存空间,用户在使用的时候可能找不到数据集,如/public/home/username/.cache/paddle/dataset目录。 -而pytorch的加载数据集API会吧数据集加载到当前目录,方便了使用者。 +2、无法下载 flower 数据集,需要手动加载数据集 +按照文档旧 API 无法在曙光平台以及移动九天平台加载数据集,需要手动下载数据集。 +且数据集的保存地址为一个缓存空间,用户在使用的时候可能找不到数据集,如/public/home/username/.cache/paddle/dataset 目录。 +而 pytorch 的加载数据集 API 会吧数据集加载到当前目录,方便了使用者。 ![image](https://user-images.githubusercontent.com/102226413/164143513-24236f90-975d-47f1-a1db-7e18e6c94c9c.png) ![image](https://user-images.githubusercontent.com/102226413/164144065-2fea8ac3-dcf5-48ac-a4c7-05cace99c611.png) -3、安装完paddle后运行该程序会缺少常用两个库:opencv-python以及scipy。 +3、安装完 paddle 后运行该程序会缺少常用两个库:opencv-python 以及 scipy。 安装方式: ```python pip install scipy -i https://pypi.tuna.tsinghua.edu.cn/simple/ pip install opencv-python -i https://pypi.tuna.tsinghua.edu.cn/simple/ -``` -4、鲜花数据集的label索引是从1开始的,不是从0开始的,需要手工减1。 +``` +4、鲜花数据集的 label 索引是从 1 开始的,不是从 0 开始的,需要手工减 1。 ```python img, label = data label = label - 1 ``` -5、未解决问题(无法在曙光上使用paddle 的问题) +5、未解决问题(无法在曙光上使用 paddle 的问题) ![image](https://user-images.githubusercontent.com/102226413/164143125-70d0e4ff-46d7-4461-8cb0-72c14e98b8e0.png) ![image](https://user-images.githubusercontent.com/102226413/164143166-cde2793b-eb06-43a3-92d1-bfa68c2f1558.png) diff --git a/docs/guides/beginner/quick_start_cn.ipynb b/docs/guides/beginner/quick_start_cn.ipynb index 3b06e5fcc3b..260b7c08cb8 100644 --- a/docs/guides/beginner/quick_start_cn.ipynb +++ b/docs/guides/beginner/quick_start_cn.ipynb @@ -670,4 +670,4 @@ }, "nbformat": 4, "nbformat_minor": 5 -} \ No newline at end of file +} diff --git a/docs/images/op.dot b/docs/images/op.dot index c8ad839cb88..d9c89225451 100644 --- a/docs/images/op.dot +++ b/docs/images/op.dot @@ -1,4 +1,4 @@ -digraph sample { +digraph sample { graph [rankdir=TD]; node [shape=record]; - op [label="{Operator| InferShape()=0\lRun()=0\l | map<string, string[]> inputs_\lmap<string, string[]> outputs_ \l AttributeMap attrs_\l}"]; -} \ No newline at end of file + op [label="{Operator| 
InferShape()=0\lRun()=0\l | map<string, string[]> inputs_\lmap<string, string[]> outputs_ \l AttributeMap attrs_\l}"]; +} diff --git a/docs/images/op_op_with_kern_class_diagram.dot b/docs/images/op_op_with_kern_class_diagram.dot index 8f24e9ea83a..22ab89f1cb2 100644 --- a/docs/images/op_op_with_kern_class_diagram.dot +++ b/docs/images/op_op_with_kern_class_diagram.dot @@ -1,6 +1,6 @@ -digraph sample { +digraph sample { graph [rankdir=TD]; node [shape=record]; - op [label="{Operator| InferShape()=0\lRun()=0\l | map<string, string[]> inputs_\lmap<string, string[]> outputs_ \l AttributeMap attrs_\l}"]; + op [label="{Operator| InferShape()=0\lRun()=0\l | map<string, string[]> inputs_\lmap<string, string[]> outputs_ \l AttributeMap attrs_\l}"]; op_with_kern [label="{OpWithKernel | InferShape()=0\lRun()\l | map<OpKernelKey,OpKernel>kernels_ }"] op_kernel [label="{OpKernel | Compute()=0}"] op_kernel_key [label="{OpKernelKey| Place place\n...}"] @@ -29,10 +29,10 @@ digraph sample { mul_kernel [label="template <typename Place>\lclass MulOpKernel\l"] op_kernel -> mul_kernel [dir=back, arrowtail=onormal] mul_op -> mul_kernel [arrowhead=vee, label="register many"] - + { rank=same; mul_op; mul_kernel; } -} \ No newline at end of file +} diff --git a/docs/images/op_with_kernel.dot b/docs/images/op_with_kernel.dot index 4f5af4f7b5f..1ceca2e401c 100644 --- a/docs/images/op_with_kernel.dot +++ b/docs/images/op_with_kernel.dot @@ -1,6 +1,6 @@ -digraph sample { +digraph sample { graph [rankdir=TD]; node [shape=record]; - op [label="{Operator}"]; + op [label="{Operator}"]; op_with_kern [label="{OpWithKernel | InferShape()=0\lRun()\l | map<OpKernelKey,OpKernel>kernels_ }"] op_kernel [label="{OpKernel | Compute()=0}"] op_kernel_key [label="{OpKernelKey| Place place\n...}"] @@ -23,4 +23,4 @@ digraph sample { } op_with_kern -> op_kernel_key [arrowhead=vee, label ="\nas map key"] -} \ No newline at end of file +} diff --git a/docs/images/rnn.dot b/docs/images/rnn.dot index c1141cd9c98..92440d347c7 100644 --- a/docs/images/rnn.dot +++ b/docs/images/rnn.dot @@ -1,5 +1,5 @@ digraph G { - label = "simple RNN implementation" + label = "simple RNN implementation" ranksep=2; diff --git a/docs/practices/cv/image_ocr.ipynb b/docs/practices/cv/image_ocr.ipynb index 5ae9b6242f8..13cafec5f95 100755 --- a/docs/practices/cv/image_ocr.ipynb +++ b/docs/practices/cv/image_ocr.ipynb @@ -650,4 +650,4 @@ }, "nbformat": 4, "nbformat_minor": 1 -} \ No newline at end of file +} diff --git a/docs/practices/quick_start/hello_paddle.ipynb b/docs/practices/quick_start/hello_paddle.ipynb index 64a2e9e2f08..d6cf10f55b9 100755 --- a/docs/practices/quick_start/hello_paddle.ipynb +++ b/docs/practices/quick_start/hello_paddle.ipynb @@ -381,4 +381,4 @@ }, "nbformat": 4, "nbformat_minor": 1 -} \ No newline at end of file +} diff --git a/docs/practices/recommendations/collaborative_filtering.ipynb b/docs/practices/recommendations/collaborative_filtering.ipynb index 32046d636cd..89a874294b1 100755 --- a/docs/practices/recommendations/collaborative_filtering.ipynb +++ b/docs/practices/recommendations/collaborative_filtering.ipynb @@ -598,4 +598,4 @@ }, "nbformat": 4, "nbformat_minor": 1 -} \ No newline at end of file +} diff --git a/docs/practices/reinforcement_learning/AlphaZero.ipynb b/docs/practices/reinforcement_learning/AlphaZero.ipynb index e7099dea991..d74a19f318e 100644 --- a/docs/practices/reinforcement_learning/AlphaZero.ipynb +++ b/docs/practices/reinforcement_learning/AlphaZero.ipynb @@ -1094,4 +1094,4 @@ }, "nbformat": 4, 
"nbformat_minor": 4 -} \ No newline at end of file +} diff --git a/docs/practices/time_series/autoencoder.ipynb b/docs/practices/time_series/autoencoder.ipynb index 5c0b8feb477..40affe4f553 100755 --- a/docs/practices/time_series/autoencoder.ipynb +++ b/docs/practices/time_series/autoencoder.ipynb @@ -1044,4 +1044,4 @@ }, "nbformat": 4, "nbformat_minor": 1 -} \ No newline at end of file +} diff --git a/docs/templates/common_docs.py b/docs/templates/common_docs.py index 28dbe4762da..6d0941a831d 100644 --- a/docs/templates/common_docs.py +++ b/docs/templates/common_docs.py @@ -12,7 +12,7 @@ bias_attr (ParamAttr, optional): The parameter attribute for learnable bias(Bias) of this layer. For more information, please refer to :ref:`api_fluid_ParamAttr`. label (Tensor): The label value corresponding to input, it's data type should be int32, int64. learning_rate (Tensor|float): The learning rate, can be a Tensor or a float value. Default is 1e-03. - axis (int, optional): The axis along which to operate. Default is 0. + axis (int, optional): The axis along which to operate. Default is 0. epsilon (float, optional): Small float added to denominator to avoid dividing by zero. Default is 1e-05. is_test (bool, optional): A flag indicating whether execution is in test phase. Default is False, means not in test phase. shape (Tensor|tuple|list): Shape of the Tensor. If shape is a list or tuple, the elements of it should be integers or Tensors with shape [1]. If shape is Tensor, it should be an 1-D Tensor . @@ -20,7 +20,7 @@ filter_size (tuple|list|int): The size of convolving kernel. It can be a single integer or a tuple/list containing two integers, representing the height and width of the convolution window respectively. If it is a single integer, the height and width are equal to the integer. padding (tuple|int): The padding size. It can be a single integer or a tuple containing two integers, representing the size of padding added to the height and width of the input. If it is a single integer, the both sides of padding are equal to the integer. Default is 0. include_sublayers (bool, optional): Whether include the sublayers. If True, return list includes the sublayers weights. Default is True. - stride (tuple|int): The stride size. It can be a single integer or a tuple containing two integers, representing the strides of the convolution along the height and width. If it is a single integer, the height and width are equal to the integer. Default is 1. + stride (tuple|int): The stride size. It can be a single integer or a tuple containing two integers, representing the strides of the convolution along the height and width. If it is a single integer, the height and width are equal to the integer. Default is 1. groups (int, optional): The group number of convolution layer. When group=n, the input and convolution kernels are divided into n groups equally, the first group of convolution kernels and the first group of inputs are subjected to convolution calculation, the second group of convolution kernels and the second group of inputs are subjected to convolution calculation, ……, the nth group of convolution kernels and the nth group of inputs perform convolution calculations. Default is 1. regularization (WeightDecayRegularizer, optional): The strategy of regularization. There are two method: :ref:`api_fluid_regularizer_L1Decay` 、 :ref:`api_fluid_regularizer_L2Decay` . 
If a parameter has set regularizer using :ref:`api_fluid_ParamAttr` already, the regularization setting here in optimizer will be ignored for this parameter. Otherwise, the regularization setting here in optimizer will take effect. Default None, meaning there is no regularization. grad_clip (GradientClipBase, optional): Gradient cliping strategy, it's an instance of some derived class of ``GradientClipBase`` . There are three cliping strategies ( :ref:`api_fluid_clip_GradientClipByGlobalNorm` , :ref:`api_fluid_clip_GradientClipByNorm` , :ref:`api_fluid_clip_GradientClipByValue` ). Default None, meaning there is no gradient clipping. @@ -33,7 +33,7 @@ dim (int, optional): A dimension along which to operate. Default is 0. is_sparse (bool, optional): Whether use sparse updating. For more information, please refer to :ref:`api_guide_sparse_update_en` . If it's True, it will ues sparse updating. place (paddle.CPUPlace()|paddle.CUDAPlace(N)|None): This parameter represents which device the executor runs on, and N means the GPU's id. When this parameter is None, PaddlePaddle will set the default device according to its installation version. If Paddle is CPU version, the default device would be set to CPUPlace(). If Paddle is GPU version, the default device would be set to CUDAPlace(0). Default is None. - num_filters (int): the number of convolution kernels, is also the number of output channels. + num_filters (int): the number of convolution kernels, is also the number of output channels. """ common_args_cn = """ diff --git a/docs/templates/layout.html b/docs/templates/layout.html index f65da40df43..837e05fb73e 100644 --- a/docs/templates/layout.html +++ b/docs/templates/layout.html @@ -10,13 +10,13 @@ {% endblock %} -{%- block extrahead %} +{%- block extrahead %}