13 changes: 1 addition & 12 deletions _typos.toml
@@ -25,6 +25,7 @@ arange = "arange"
unsupport = "unsupport"
Nervana = "Nervana"
datas = "datas"
+feeded = "feeded"

# These words need to be fixed
Learing = "Learing"
@@ -39,18 +39,6 @@ dimention = "dimention"
dimentions = "dimentions"
dirrectories = "dirrectories"
disucssion = "disucssion"
-feeded = "feeded"
-flaot = "flaot"
-fliters = "fliters"
-follwing = "follwing"
-formated = "formated"
-formater = "formater"
-forword = "forword"
-foward = "foward"
-functinal = "functinal"
-fundemental = "fundemental"
-funtion = "funtion"
-ilter = "ilter"
inferface = "inferface"
infor = "infor"
instert = "instert"
@@ -2,7 +2,7 @@

ResNetBasicBlock
-------------------------------
-.. py:class:: paddle.incubate.xpu.resnet_block.ResNetBasicBlock(num_channels1, num_filter1, filter1_size, num_channels2, num_filter2, filter2_size, num_channels3, num_filter3, filter3_size, stride1=1, stride2=1, stride3=1, act='relu', momentum=0.9, eps=1e-5, data_format='NCHW', has_shortcut=False, use_global_stats=False, is_test=False, filter1_attr=None, scale1_attr=None, bias1_attr=None, moving_mean1_name=None, moving_var1_name=None, filter2_attr=None, scale2_attr=None, bias2_attr=None, moving_mean2_name=None, moving_var2_name=None, ilter3_attr=None, scale3_attr=None, bias3_attr=None, moving_mean3_name=None, moving_var3_name=None, padding1=0, padding2=0, padding3=0, dilation1=1, dilation2=1, dilation3=1, trainable_statistics=False, find_conv_max=True)
+.. py:class:: paddle.incubate.xpu.resnet_block.ResNetBasicBlock(num_channels1, num_filter1, filter1_size, num_channels2, num_filter2, filter2_size, num_channels3, num_filter3, filter3_size, stride1=1, stride2=1, stride3=1, act='relu', momentum=0.9, eps=1e-5, data_format='NCHW', has_shortcut=False, use_global_stats=False, is_test=False, filter1_attr=None, scale1_attr=None, bias1_attr=None, moving_mean1_name=None, moving_var1_name=None, filter2_attr=None, scale2_attr=None, bias2_attr=None, moving_mean2_name=None, moving_var2_name=None, filter3_attr=None, scale3_attr=None, bias3_attr=None, moving_mean3_name=None, moving_var3_name=None, padding1=0, padding2=0, padding3=0, dilation1=1, dilation2=1, dilation3=1, trainable_statistics=False, find_conv_max=True)
This interface builds a callable object of the ``ResNetBasicBlock`` class, which computes several ``Conv2D``, ``BatchNorm`` and ``ReLU`` operations in a single pass; see the source code link for the order in which they are arranged.

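For reference, a minimal usage sketch built from the signature above; the channel and filter sizes are illustrative assumptions, and the op requires an XPU build of Paddle:

```python
import paddle
from paddle.incubate.xpu.resnet_block import ResNetBasicBlock

# Illustrative sizes only; a real block must match the surrounding network.
block = ResNetBasicBlock(
    num_channels1=8, num_filter1=8, filter1_size=3,
    num_channels2=8, num_filter2=8, filter2_size=3,
    num_channels3=8, num_filter3=8, filter3_size=1,
    stride1=1, stride2=1, stride3=1,
    act='relu', has_shortcut=False)

x = paddle.randn([2, 8, 32, 32])  # NCHW input, matching data_format='NCHW'
out = block(x)
```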
2 changes: 1 addition & 1 deletion docs/api/paddle/nn/GRU_cn.rst
@@ -35,7 +35,7 @@ GRU
- **input_size** (int) - Size of the input :math:`x`.
- **hidden_size** (int) - Size of the hidden state :math:`h`.
- **num_layers** (int, optional) - Number of layers in the recurrent network. For example, setting num_layers to 2 stacks two GRU networks, where the second layer takes the output of the first as its input. Defaults to 1.
-- **direction** (str, optional) - Direction in which the network iterates; can be set to forward or bidirect (or bidirectional). foward means a unidirectional GRU running from the start of the sequence to its end; bidirectional means a bidirectional GRU running from start to end and then from end back to start. Defaults to forward.
+- **direction** (str, optional) - Direction in which the network iterates; can be set to forward or bidirect (or bidirectional). forward means a unidirectional GRU running from the start of the sequence to its end; bidirectional means a bidirectional GRU running from start to end and then from end back to start. Defaults to forward.
- **time_major** (bool, optional) - Whether the first dimension of the input is the time steps. If time_major is True, the Tensor has shape [time_steps, batch_size, input_size]; otherwise [batch_size, time_steps, input_size]. `time_steps` is the length of the input sequence. Defaults to False.
- **dropout** (float, optional) - Dropout probability, applied to the input of every layer except the first. The range is [0, 1]. Defaults to 0.
- **weight_ih_attr** (ParamAttr, optional) - Parameter attribute of weight_ih. Defaults to None.
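A short usage sketch of the parameters documented above; the sizes are illustrative, and the shapes in the comments assume the default time_major=False:

```python
import paddle

# Two stacked layers, bidirectional; input is batch-first (time_major=False).
rnn = paddle.nn.GRU(input_size=16, hidden_size=32, num_layers=2,
                    direction='bidirect')
x = paddle.randn((4, 23, 16))  # [batch_size, time_steps, input_size]
y, h = rnn(x)                  # y: [4, 23, 64], h: [4, 4, 32]
```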
2 changes: 1 addition & 1 deletion docs/api/paddle/nn/LSTM_cn.rst
@@ -43,7 +43,7 @@ LSTM
- **input_size** (int) - Size of the input :math:`x`.
- **hidden_size** (int) - Size of the hidden state :math:`h`.
- **num_layers** (int, optional) - Number of layers in the recurrent network. For example, setting num_layers to 2 stacks two LSTM networks, where the second layer takes the output of the first as its input. Defaults to 1.
-- **direction** (str, optional) - Direction in which the network iterates; can be set to forward or bidirect (or bidirectional). foward means a unidirectional LSTM running from the start of the sequence to its end; bidirectional means a bidirectional LSTM running from start to end and then from end back to start. Defaults to forward.
+- **direction** (str, optional) - Direction in which the network iterates; can be set to forward or bidirect (or bidirectional). forward means a unidirectional LSTM running from the start of the sequence to its end; bidirectional means a bidirectional LSTM running from start to end and then from end back to start. Defaults to forward.
- **time_major** (bool, optional) - Whether the first dimension of the input is the time steps. If time_major is True, the Tensor has shape [time_steps, batch_size, input_size]; otherwise [batch_size, time_steps, input_size]. `time_steps` is the length of the input sequence. Defaults to False.
- **dropout** (float, optional) - Dropout probability, applied to the input of every layer except the first. The range is [0, 1]. Defaults to 0.
- **weight_ih_attr** (ParamAttr, optional) - Parameter attribute of weight_ih. Defaults to None.
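The same pattern for LSTM, with illustrative sizes; unlike GRU, the final state is an (h, c) tuple:

```python
import paddle

rnn = paddle.nn.LSTM(input_size=16, hidden_size=32, num_layers=2)
x = paddle.randn((4, 23, 16))  # [batch_size, time_steps, input_size]
y, (h, c) = rnn(x)             # y: [4, 23, 32]; h and c: [2, 4, 32]
```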
2 changes: 1 addition & 1 deletion docs/api/paddle/nn/SimpleRNN_cn.rst
@@ -25,7 +25,7 @@ SimpleRNN
- **input_size** (int) - Size of the input :math:`x`.
- **hidden_size** (int) - Size of the hidden state :math:`h`.
- **num_layers** (int, optional) - Number of layers in the recurrent network. For example, setting num_layers to 2 stacks two SimpleRNN networks, where the second layer takes the output of the first as its input. Defaults to 1.
-- **direction** (str, optional) - Direction in which the network iterates; can be set to forward or bidirect (or bidirectional). foward means a unidirectional SimpleRNN running from the start of the sequence to its end; bidirectional means a bidirectional SimpleRNN running from start to end and then from end back to start. Defaults to forward.
+- **direction** (str, optional) - Direction in which the network iterates; can be set to forward or bidirect (or bidirectional). forward means a unidirectional SimpleRNN running from the start of the sequence to its end; bidirectional means a bidirectional SimpleRNN running from start to end and then from end back to start. Defaults to forward.
- **time_major** (bool, optional) - Whether the first dimension of the input is the time steps. If time_major is True, the Tensor has shape [time_steps, batch_size, input_size]; otherwise [batch_size, time_steps, input_size]. `time_steps` is the length of the input sequence. Defaults to False.
- **dropout** (float, optional) - Dropout probability, applied to the input of every layer except the first. The range is [0, 1]. Defaults to 0.
- **activation** (str, optional) - Activation function of each cell in the network; can be tanh or relu. Defaults to tanh.
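And for SimpleRNN, which additionally exposes the activation parameter; sizes again illustrative:

```python
import paddle

rnn = paddle.nn.SimpleRNN(input_size=16, hidden_size=32, num_layers=2,
                          activation='relu')
x = paddle.randn((4, 23, 16))  # [batch_size, time_steps, input_size]
y, h = rnn(x)                  # y: [4, 23, 32], h: [2, 4, 32]
```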
2 changes: 1 addition & 1 deletion docs/api/paddle/static/nn/batch_norm_cn.rst
@@ -48,7 +48,7 @@ moving_mean and moving_var are the global mean and variance accumulated during training
Parameters
::::::::::::

-- **input** (Tensor) - Input feature of the batch_norm operator, a Tensor with 2, 3, 4 or 5 dimensions. Data types: flaot16, float32, float64.
+- **input** (Tensor) - Input feature of the batch_norm operator, a Tensor with 2, 3, 4 or 5 dimensions. Data types: float16, float32, float64.
- **act** (string) - Activation function type, e.g. leaky_relu, relu, prelu. Default: None.
- **is_test** (bool) - Whether this is the test phase; outside of training, the global mean and variance accumulated during training are used. Default: False.
- **momentum** (float|Tensor) - Value used to compute moving_mean and moving_var; a float, or a Tensor of shape [1] with data type float32. The update formulas are :math:`moving\_mean = moving\_mean * momentum + new\_mean * (1. - momentum)` and :math:`moving\_var = moving\_var * momentum + new\_var * (1. - momentum)`. Default: 0.9.
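A minimal static-graph sketch of the parameters above; the layer sizes are illustrative assumptions:

```python
import paddle

paddle.enable_static()
x = paddle.static.data(name='x', shape=[-1, 3, 32, 32], dtype='float32')
conv = paddle.static.nn.conv2d(input=x, num_filters=8, filter_size=3)
# moving_mean/moving_var are updated with momentum=0.9 per the formula above.
out = paddle.static.nn.batch_norm(input=conv, act='relu', momentum=0.9)
```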
2 changes: 1 addition & 1 deletion docs/api/paddle/static/nn/conv3d_cn.rst
@@ -76,7 +76,7 @@ conv3d
::::::::::::

- **input** (Tensor) - A 5-D Tensor of shape :math:`[N, C, D, H, W]` or :math:`[N, D, H, W, C]`, where N is the batch size, C the number of channels, D the feature depth, H the feature height and W the feature width. Data type: float16, float32 or float64.
-- **num_fliters** (int) - Number of filters (convolution kernels); equal to the number of output channels.
+- **num_filters** (int) - Number of filters (convolution kernels); equal to the number of output channels.
- **filter_size** (int|list|tuple) - Filter size. If it is a list or tuple, it must contain three integers: (filter_size_depth, filter_size_height, filter_size_width). If it is a single integer, filter_size_depth = filter_size_height = filter_size_width = filter_size.
- **stride** (int|list|tuple, optional) - Stride size, i.e. the step by which the filter slides over the input during convolution. If it is a list or tuple, it must contain three integers: (stride_depth, stride_height, stride_width). If it is a single integer, stride_depth = stride_height = stride_width = stride. Default: 1.
- **padding** (int|list|tuple|str, optional) - Padding size. If it is a string, it can be "VALID" or "SAME", denoting the padding algorithm; see the formulas above for ``padding`` = "SAME" or ``padding`` = "VALID". If it is a tuple or list, it can take one of 3 forms:
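A minimal sketch using the corrected num_filters argument; the shapes are illustrative assumptions:

```python
import paddle

paddle.enable_static()
# NCDHW input: batch, channels, depth, height, width.
data = paddle.static.data(name='data', shape=[-1, 3, 12, 32, 32],
                          dtype='float32')
out = paddle.static.nn.conv3d(input=data, num_filters=2, filter_size=3,
                              act='relu')
```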
2 changes: 1 addition & 1 deletion docs/design/concepts/tensor_array.md
@@ -218,7 +218,7 @@ Since each step of an RNN can only take a tensor-represented batch of data as input,
some preprocessing has to be applied to the inputs, such as sorting the sentences by length in descending order, then cutting the batch at each word position and packing the slices into new batches.

Such cut-like operations can be embedded into `TensorArray` as general methods called `unpack` and `pack`,
-these two operations are similar to `stack` and `unstack` except that they operate on variable-length sequences formated as a LoD tensor rather than a tensor.
+these two operations are similar to `stack` and `unstack` except that they operate on variable-length sequences formatted as a LoD tensor rather than a tensor.

Some definitions are like

2 changes: 1 addition & 1 deletion docs/design/concurrent/channel.md
@@ -3,7 +3,7 @@
## Introduction

A Channel is a data structure that allows for synchronous interprocess
-communication via message passing. It is a fundemental component of CSP
+communication via message passing. It is a fundamental component of CSP
(communicating sequential processes), and allows for users to pass data
between threads without having to worry about synchronization.

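To make the idea concrete outside Paddle's own channel implementation, here is a rough Python analogue using a bounded queue; queue.Queue only approximates a CSP channel (a put may return before the matching get), but it shows the message-passing style:

```python
import threading
import queue

ch = queue.Queue(maxsize=1)  # a bounded queue, roughly a buffered channel

def sender():
    for i in range(3):
        ch.put(i)            # blocks while the buffer is full

def receiver():
    for _ in range(3):
        print(ch.get())      # blocks until a value arrives

t_send = threading.Thread(target=sender)
t_recv = threading.Thread(target=receiver)
t_send.start(); t_recv.start()
t_send.join(); t_recv.join()
```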
2 changes: 1 addition & 1 deletion docs/design/mkldnn/inplace/inplace.md
@@ -56,7 +56,7 @@ Pattern is restricted so that the op to be in-placed is of oneDNN type. Due to the fact that ops may have
more than one input and their output may be consumed by more than one operator, it is expected that the pattern
may be detected multiple times for the same operator, e.g. once for one input, then for the second input, etc.

-Just having oneDNN operator capable of in-place is not enough to have in-place execution enabled, hence follwing rules
+Just having a oneDNN operator capable of in-place is not enough to have in-place execution enabled, hence the following rules
are checked by the oneDNN in-place pass:
1. If the input node to an in-place operator is also an input to a different operator, then in-place computation cannot be performed, as there is a risk that the other operator consuming the in-placed op's input will be executed after the in-placed operator and would therefore read invalid input data (overwritten by the in-place computation).
2. If, after the in-placed operator, another operator reuses the in-place op's input var, then in-place cannot happen unless that next op can itself perform in-place computation. The next picture presents the idea.
2 changes: 1 addition & 1 deletion docs/design/modules/prune.md
@@ -12,7 +12,7 @@ Pruning needs to support both variables and operators as evaluation targets. Consider the following
different situations.

```python
-# Case 1: run foward pass.
+# Case 1: run forward pass.
cost_np = session.run(target=cost)
# Case 2: run backward pass.
opts_np, _ = session.run(target=[cost, opt])
2 changes: 1 addition & 1 deletion docs/dev_guides/custom_device_docs/runtime_data_type_en.md
@@ -18,7 +18,7 @@ typedef enum {

C_SUCCESS - The returned value when the execution of the function is a success

-C_WARNING - The returned value when the performance of the funtion falls short of expectations. For example, the asynchronous API is actually synchronous.
+C_WARNING - The returned value when the performance of the function falls short of expectations. For example, the asynchronous API is actually synchronous.

C_FAILED - Resources run out or the request fails.

2 changes: 1 addition & 1 deletion docs/dev_guides/git_guides/local_dev_guide_en.md
@@ -121,7 +121,7 @@ Check for merge conflicts................................................Passed
Check for broken symlinks................................................Passed
Detect Private Key...................................(no files to check)Skipped
Fix End of Files.....................................(no files to check)Skipped
-clang-formater.......................................(no files to check)Skipped
+clang-formatter.......................................(no files to check)Skipped
[my-cool-stuff c703c041] add test file
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 233
2 changes: 1 addition & 1 deletion docs/guides/advanced/customize_cn.ipynb
@@ -57,7 +57,7 @@
" - label:单个或批次训练数据对应的标签数据\n",
" 接口返回值是一个Tensor,根据需要将所有x和label计算得到的loss值求和或取均值\n",
" \"\"\"\n",
" # 返回forword中计算的结果\n",
" # 返回forward中计算的结果\n",
" # output = xxxxx\n",
" # return output"
]
2 changes: 1 addition & 1 deletion docs/guides/custom_op/new_python_op_cn.md
@@ -243,7 +243,7 @@ def py_func(func, x, out, backward_func=None, skip_vars_in_backward_input=None):

If the forward function takes inputs `x_1`, `x_2`, ..., `x_n` and produces outputs `y_1`, `y_2`, ..., `y_m`, then the forward function is defined in the following format:
```Python
-def foward_func(x_1, x_2, ..., x_n):
+def forward_func(x_1, x_2, ..., x_n):
...
return y_1, y_2, ..., y_m
```
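A small end-to-end sketch using the py_func signature shown in the hunk header above; create_tmp_var is a helper assumed here for registering the output variable:

```python
import numpy as np
import paddle

paddle.enable_static()

def create_tmp_var(name, dtype, shape):
    # Helper assumed for this sketch: registers an output var in the block.
    return paddle.static.default_main_program().current_block().create_var(
        name=name, dtype=dtype, shape=shape)

def forward_func(x):
    # Runs as plain Python/NumPy when the program executes.
    return np.tanh(np.array(x))

x = paddle.static.data(name='x', shape=[-1, 4], dtype='float32')
out = create_tmp_var('out', 'float32', [-1, 4])
paddle.static.py_func(func=forward_func, x=x, out=out)
```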
@@ -9,7 +9,7 @@ torch.Tensor.requires_grad_(requires_grad=True)
paddle.Tensor.stop_gradient = False
```

-Both are functionally equivalent: torch uses a funtion call, while paddle uses attribute assignment. The details are as follows:
+Both are functionally equivalent: torch uses a function call, while paddle uses attribute assignment. The details are as follows:
### Parameter mapping

| PyTorch | PaddlePaddle | Notes |
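A runnable sketch of the Paddle side of this mapping:

```python
import paddle

x = paddle.randn([3])
x.stop_gradient = False  # same effect as x.requires_grad_(True) in torch
y = (x * x).sum()
y.backward()
print(x.grad)            # gradients are now tracked for x
```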
@@ -12,5 +12,5 @@ Paddle has no such API; it needs to be implemented by composition.
torch.nn.functional.softmin(input, dim=1)

# Paddle equivalent
-paddle.nn.functinal.softmax(-input, axis=1)
+paddle.nn.functional.softmax(-input, axis=1)
```
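A runnable version of the composed call with a concrete input; softmin assigns larger probabilities to smaller entries, which softmax(-x) reproduces:

```python
import paddle

x = paddle.to_tensor([[1.0, 2.0, 3.0],
                      [4.0, 5.0, 6.0]])
out = paddle.nn.functional.softmax(-x, axis=1)  # equivalent to softmin(x)
print(out)
```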