
Commit bbedf0b: fix E-List (#7608)
1 parent 5229249


18 files changed: +24 -36 lines

18 files changed

+24
-36
lines changed

_typos.toml

Lines changed: 0 additions & 12 deletions

@@ -26,9 +26,6 @@ Nervana = "Nervana"
 
 # These words need to be fixed
 Creenshot = "Creenshot"
-Embeddding = "Embeddding"
-Embeding = "Embeding"
-Engish = "Engish"
 Learing = "Learing"
 Moible = "Moible"
 Operaton = "Operaton"
@@ -57,15 +54,6 @@ dimention = "dimention"
 dimentions = "dimentions"
 dirrectories = "dirrectories"
 disucssion = "disucssion"
-egde = "egde"
-enviornment = "enviornment"
-erros = "erros"
-evalute = "evalute"
-exampels = "exampels"
-exection = "exection"
-exlusive = "exlusive"
-exmaple = "exmaple"
-exsits = "exsits"
 feeded = "feeded"
 flaot = "flaot"
 fliters = "fliters"
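The entries above map each misspelling to itself, which tells the typo checker to accept the word; removing an entry re-enables the diagnostic for it. A minimal sketch of that allowlist semantics (not the actual `typos` tool, and `KNOWN_TYPOS` is an assumed stand-in for its dictionary):

```python
# Hypothetical typo dictionary standing in for the checker's built-in one.
KNOWN_TYPOS = {"exsits": "exists", "egde": "edge", "enviornment": "environment"}

def report_typos(text, allowlist):
    """Return (word, suggestion) pairs for known typos not in the allowlist."""
    hits = []
    for word in text.split():
        fix = KNOWN_TYPOS.get(word)
        # An entry of the form word = "word" means "accept this spelling".
        if fix and allowlist.get(word) != word:
            hits.append((word, fix))
    return hits

# Before this commit: "exsits" was allowlisted, so it was not reported.
assert report_typos("this doc exsits", {"exsits": "exsits"}) == []
# After this commit: the entry is gone, so the typo is flagged again.
assert report_typos("this doc exsits", {}) == [("exsits", "exists")]
```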

ci_scripts/check_api_docs_en.py

Lines changed: 1 addition & 1 deletion

@@ -124,6 +124,6 @@ def check_system_message_in_doc(doc_file):
 if error_files:
     print("error files: ", error_files)
     print(
-        "ERROR: these docs exsits System Message: WARNING/ERROR, please check and fix them"
+        "ERROR: these docs exists System Message: WARNING/ERROR, please check and fix them"
     )
     sys.exit(1)
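The script fails CI when generated docs contain Sphinx build diagnostics. A hedged sketch of the kind of check it performs (the signature differs from the real script, which takes a file path, and the marker format is an assumption):

```python
import re

# Sphinx/docutils embed diagnostics like "System Message: WARNING/2" or
# "System Message: ERROR/3" into the rendered output when reST is malformed.
SYSTEM_MESSAGE = re.compile(r"System Message: (WARNING|ERROR)")

def doc_has_system_message(doc_text):
    """Return True if the rendered doc text contains a Sphinx system message."""
    return bool(SYSTEM_MESSAGE.search(doc_text))

assert doc_has_system_message("<div>System Message: ERROR/3 (api.rst)</div>")
assert not doc_has_system_message("<div>all good</div>")
```

In the real script, every doc with a hit is collected into `error_files`, the error message shown in the diff above is printed, and `sys.exit(1)` fails the CI job.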

ci_scripts/check_api_docs_en.sh

Lines changed: 1 addition & 1 deletion

@@ -13,7 +13,7 @@ function check_system_message(){
     fi
 }
 
-echo "RUN Engish API Docs Checks"
+echo "RUN English API Docs Checks"
 jsonfn=$1
 output_path=$2
 need_check_api_py_files="${3}"

docs/design/dist_train/README.md

Lines changed: 1 addition & 1 deletion

@@ -48,7 +48,7 @@ The training process of asynchronous training can be:
 2. Trainer gets all parameters back from pserver.
 
 ### Note:
-There are also some conditions that need to consider. For exmaple:
+There are also some conditions that need to consider. For example:
 
 1. If trainer needs to wait for the pserver to apply it's gradient and then get back the parameters back.
 1. If we need a lock between parameter update and parameter fetch.
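The second condition in the note above, a lock between parameter update and parameter fetch, can be sketched as follows. This is an illustrative toy, not PaddlePaddle code: the lock makes the pserver's gradient application atomic with respect to parameter fetches, so a trainer never reads a half-updated parameter set.

```python
import threading

class PServer:
    """Toy parameter server holding a name -> value parameter dict."""
    def __init__(self, params):
        self.params = dict(params)
        self._lock = threading.Lock()

    def apply_gradient(self, grads, lr=0.5):
        with self._lock:                 # update is atomic w.r.t. fetch
            for name, g in grads.items():
                self.params[name] -= lr * g

    def fetch_parameters(self):
        with self._lock:                 # fetch sees a consistent snapshot
            return dict(self.params)

ps = PServer({"w": 1.0})
ps.apply_gradient({"w": 1.0})            # w := 1.0 - 0.5 * 1.0 = 0.5
assert ps.fetch_parameters() == {"w": 0.5}
```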

docs/design/memory/memory_optimization.md

Lines changed: 1 addition & 1 deletion

@@ -60,7 +60,7 @@ We can leran these techniques from compilers. There are mainly two stages to mak
 
 
 #### Control Flow Graph
-To perform analysis on a program, it is often useful to make a control flow graph. A [control flow graph](https://en.wikipedia.org/wiki/Control_flow_graph) (CFG) in computer science is a representation, using graph notation, of all paths that might be traversed through a program during its execution. Each statement in the program is a node in the flow graph; if statemment x can be followed by statement y, there is an egde from x to y.
+To perform analysis on a program, it is often useful to make a control flow graph. A [control flow graph](https://en.wikipedia.org/wiki/Control_flow_graph) (CFG) in computer science is a representation, using graph notation, of all paths that might be traversed through a program during its execution. Each statement in the program is a node in the flow graph; if statemment x can be followed by statement y, there is an edge from x to y.
 
 Following is the flow graph for a simple loop.
 
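The CFG definition in the paragraph above can be sketched in a few lines: each statement is a node, and an edge x -> y exists whenever statement y can execute immediately after statement x. This is a minimal illustration, not the framework's analysis code; statement labels and the `jumps` mapping are assumptions for the example.

```python
def build_cfg(statements, jumps):
    """statements: ordered list of labels; jumps: {label: [targets]} for
    branches. Straight-line fall-through edges are added automatically."""
    edges = set()
    for i, stmt in enumerate(statements):
        if stmt in jumps:                      # explicit branch targets
            for target in jumps[stmt]:
                edges.add((stmt, target))
        elif i + 1 < len(statements):          # fall through to next statement
            edges.add((stmt, statements[i + 1]))
    return edges

# A simple loop:  s1; while (cond) { body }; s2
cfg = build_cfg(
    ["s1", "cond", "body", "s2"],
    {"cond": ["body", "s2"], "body": ["cond"]},
)
assert ("s1", "cond") in cfg and ("cond", "body") in cfg
assert ("body", "cond") in cfg and ("cond", "s2") in cfg
```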

docs/design/mkldnn/inplace/inplace.md

Lines changed: 1 addition & 1 deletion

@@ -94,4 +94,4 @@ replace this original name in all of next op instances.
 
 \* oneDNN gelu kernel is able to perform in-place execution, but currently gelu op does not support in-place execution.
 
-\*\* sum kernel is using oneDNN sum primitive that does not provide in-place exection, so in-place computation is done faked through external buffer. So it was not added into oneDNN inplace pass.
+\*\* sum kernel is using oneDNN sum primitive that does not provide in-place execution, so in-place computation is done faked through external buffer. So it was not added into oneDNN inplace pass.
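The phrase "in-place computation is done faked through external buffer" can be illustrated with a small numpy sketch (an analogy, not the oneDNN kernel): the primitive computes out of place into a scratch buffer, then the result is copied back over the destination, so callers see in-place semantics.

```python
import numpy as np

def fake_inplace_sum(dst, srcs):
    """Emulate dst += sum(srcs) "in place" via an external scratch buffer."""
    scratch = dst.copy()              # external buffer, not aliasing dst
    for s in srcs:
        scratch += s                  # out-of-place accumulation
    dst[...] = scratch                # copy back: looks in-place to callers
    return dst

x = np.array([1.0, 2.0])
fake_inplace_sum(x, [np.array([10.0, 20.0])])
assert x.tolist() == [11.0, 22.0]
```

The extra buffer and copy are why this faked in-place path brings no memory saving, which is the stated reason the op was not added to the oneDNN inplace pass.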

docs/design/phi/kernel_migrate_cn.md

Lines changed: 1 addition & 1 deletion

@@ -159,7 +159,7 @@ void LogSoftmaxKernel(const Context& dev_ctx,
 | `auto* ptr = out->mutbale_data()` | `auto* ptr = out->data()` |
 | `out->mutbale_data(dims, place)` | `out->Resize(dims); dev_ctx.template Alloc(out)` |
 | `out->mutbale_data(place, dtype)` | `dev_ctx.Alloc(out, dtype)` |
-| `platform::erros::XXX` | `phi::erros::XXX` |
+| `platform::errors::XXX` | `phi::errors::XXX` |
 | `platform::float16/bfloat16/complex64/complex128` | `dtype::float16/bfloat16/complex64/complex128` |
 | `framework::Eigen***` | `Eigen***` |
 | `platform::XXXPlace` | `phi::XXXPlace` |

docs/design/phi/kernel_migrate_en.md

Lines changed: 1 addition & 1 deletion

@@ -159,7 +159,7 @@ Secondly, it is necessary to replace some of the types or functions that were on
 | `auto* ptr = out->mutbale_data()` | `auto* ptr = out->data()` |
 | `out->mutbale_data(dims, place)` | `out->Resize(dims); dev_ctx.template Alloc(out)` |
 | `out->mutbale_data(place, dtype)` | `dev_ctx.Alloc(out, dtype)` |
-| `platform::erros::XXX` | `phi::erros::XXX` |
+| `platform::errors::XXX` | `phi::errors::XXX` |
 | `platform::float16/bfloat16/complex64/complex128` | `dtype::float16/bfloat16/complex64/complex128` |
 | `framework::Eigen***` | `Eigen***` |
 | `platform::XXXPlace` | `phi::XXXPlace` |

docs/dev_guides/custom_device_docs/custom_device_example_en.md

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ In this section we will walk through the steps required to extend a fake hardwar
 
 **InitPlugin**
 
-As a custom runtime entry function, InitPlugin is required to be implemented by the plug-in. The parameter in InitPlugin should also be checked, device information should be filled in, and the runtime API should be registered. In the initialization, PaddlePaddle loads the plug-in and invokes InitPlugin to initialize it, and register runtime (The whole process can be done automatically by the framework, only if the dynamic-link library is in site-packages/paddle-plugins/ or the designated directory of the enviornment variable of CUSTOM_DEVICE_ROOT).
+As a custom runtime entry function, InitPlugin is required to be implemented by the plug-in. The parameter in InitPlugin should also be checked, device information should be filled in, and the runtime API should be registered. In the initialization, PaddlePaddle loads the plug-in and invokes InitPlugin to initialize it, and register runtime (The whole process can be done automatically by the framework, only if the dynamic-link library is in site-packages/paddle-plugins/ or the designated directory of the environment variable of CUSTOM_DEVICE_ROOT).
 
 Example:
 
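The InitPlugin flow described above (framework loads the plug-in, calls its entry function; the entry function checks its parameter, fills in device info, and registers runtime APIs) can be sketched as a loose Python analogy. This is not the real C/C++ plug-in API; `RuntimeParams`, the device name, and the registered callback are all invented for illustration.

```python
class RuntimeParams:
    """Stand-in for the parameter struct the framework passes to the entry."""
    def __init__(self):
        self.device_type = None
        self.api = {}

def init_plugin(params):
    """Plug-in entry point: check params, fill device info, register APIs."""
    assert isinstance(params, RuntimeParams)      # parameter check
    params.device_type = "FakeDevice"             # fill in device information
    params.api["memory_copy"] = lambda dst, src: dst.extend(src)  # register API

# Framework side: discover the plug-in library, then invoke its entry function.
params = RuntimeParams()
init_plugin(params)
assert params.device_type == "FakeDevice"
buf = []
params.api["memory_copy"](buf, [1, 2])
assert buf == [1, 2]
```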

docs/guides/06_distributed_training/model_parallel_cn.rst

Lines changed: 1 addition & 1 deletion

@@ -29,7 +29,7 @@
 
 The Embedding operation can be understood as a table lookup: the input is treated as an index into the Embedding parameters, and the entry found by that index is returned as the output, as shown in figure (a) below. Under model parallelism, the Embedding parameters are partitioned evenly across multiple cards. If the Embedding parameter has dimensions N*D and model parallelism runs on K cards, each card holds a slice of dimensions N//K*D; when N is not divisible by K, the last card holds a slice of dimensions (N//K+N%K)*D. In figure (b) below, the Embedding parameter has dimensions 8*D and model parallelism runs on 2 cards, so each card holds Embedding parameters of dimensions 4*D.
 
-For ease of exposition, we assume below that the Embedding parameter dimension N is divisible by the number of model-parallel cards K. Each card then holds parameter rows with local indices [0, N/K) and logical indices [k*N/K, (k+1)*N/K), where k is the card index and 0<=k<K. For an input index I, if I falls within the logical index range of a card, that card returns the corresponding entry (local index I-k*N/K); otherwise it returns an all-zero dummy entry. An AllReduce operation then sums the output entries from all cards, giving the output of the Embeding operation; the whole lookup process is shown in figure (b) below.
+For ease of exposition, we assume below that the Embedding parameter dimension N is divisible by the number of model-parallel cards K. Each card then holds parameter rows with local indices [0, N/K) and logical indices [k*N/K, (k+1)*N/K), where k is the card index and 0<=k<K. For an input index I, if I falls within the logical index range of a card, that card returns the corresponding entry (local index I-k*N/K); otherwise it returns an all-zero dummy entry. An AllReduce operation then sums the output entries from all cards, giving the output of the Embedding operation; the whole lookup process is shown in figure (b) below.
 
 .. image:: ./images/parallel_embedding.png
    :width: 600
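The parallel Embedding lookup described in the changed paragraph can be sketched with numpy (a single-process simulation of the K cards, with a plain sum standing in for AllReduce):

```python
import numpy as np

N, D, K = 8, 4, 2                      # table is N x D, split across K cards
table = np.arange(N * D, dtype=np.float64).reshape(N, D)
shards = np.split(table, K)            # each card holds N // K rows

def local_lookup(shard, k, index):
    lo = k * (N // K)                  # logical index range [lo, lo + N//K)
    if lo <= index < lo + N // K:
        return shard[index - lo]       # real entry held by this card
    return np.zeros(D)                 # all-zero dummy entry

def parallel_embedding(index):
    partial = [local_lookup(shards[k], k, index) for k in range(K)]
    return np.sum(partial, axis=0)     # AllReduce: sum partial results

for i in range(N):                     # matches a single-card lookup exactly
    assert np.array_equal(parallel_embedding(i), table[i])
```

Because exactly one card contributes a non-zero row for any in-range index, the AllReduce sum recovers the same result as an unpartitioned lookup.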
