
Commit 699eb2f

few typos and broken link fixes in the documentation (#2656)
* typo, (fr) processus instead of (en) process
* Fix broken link RLHF
* broken link [core blocks] twice
1 parent 550a44d · commit 699eb2f

2 files changed: +4 −4 lines


docs/src/pages/introduction.mdx

Lines changed: 2 additions & 2 deletions
```diff
@@ -31,7 +31,7 @@ special `<|EndofText|>` token or runs out of context.
 ### Context Size
 
 To predict the next token, Transformer-based large language models will _attend_ (look at all
-previous tokens, the prompt) with a processus called _Attention_. This computationally expensive
+previous tokens, the prompt) with a process called _Attention_. This computationally expensive
 process imposes constraints on the maximum amount of text a model can operate with. This maximum
 length is referred to as its **context size**. Each large language model has a specific
 context size, but it generally consists of 4000 tokens (~4000 words). Some have 2000, others
@@ -109,7 +109,7 @@ viable the approach described above.
 
 Along with a well thought-out UI, improvements in instruction-following was one of the core advances
 of ChatGPT, possibly explaining its rapid success. ChatGPT relied on a novel instruction-following
-fine-tuning paradigm called [reinforcement learning from human feedback](#).
+fine-tuning paradigm called [reinforcement learning from human feedback](https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback).
 
 ### Fine-tuning
 
```
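
The paragraph corrected above explains that attending over all previous tokens is what bounds a model's usable prompt length (its context size). As a rough illustration, not part of the commit or the Dust docs, here is a minimal Python sketch of checking a prompt against a token budget; the `tiktoken` library and the `cl100k_base` encoding are assumptions, and the 4000-token figure simply echoes the ballpark quoted in the doc text.

```python
# Minimal sketch (assumptions: tiktoken is installed; cl100k_base encoding).
import tiktoken

CONTEXT_SIZE = 4000  # ballpark figure quoted in the doc text above

enc = tiktoken.get_encoding("cl100k_base")

def fits_in_context(prompt: str, max_new_tokens: int = 256) -> bool:
    """True if the prompt plus the expected completion fits the budget."""
    n_prompt_tokens = len(enc.encode(prompt))
    return n_prompt_tokens + max_new_tokens <= CONTEXT_SIZE

print(fits_in_context("To predict the next token, the model attends..."))  # True
```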

docs/src/pages/overview.mdx

Lines changed: 2 additions & 2 deletions
```diff
@@ -130,7 +130,7 @@ Dust apps can also define datasets which are arrays of JSON objects. Datasets' m
 - Store example inputs on which the app is run during its design. These datasets are pulled from
   `input` blocks.
 - Store few-shot examples used when prompting models. These datasets are made available to `llm`
-  blocks through the use of `data` blocks (see [core blocks](#)).
+  blocks through the use of `data` blocks (see [core blocks](/core-blocks)).
 
 All datasets are automatically versioned and each app version points to their specific dataset
 version.
```
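
This hunk fixes the [core blocks] link in the passage describing datasets as arrays of JSON objects that feed few-shot examples to `llm` blocks via `data` blocks. For readers unfamiliar with the pattern, a minimal plain-Python sketch of few-shot prompting from such a dataset follows; it only mimics the idea and is not Dust's actual block API (the `build_few_shot_prompt` helper is hypothetical).

```python
# Illustrative sketch only: a dataset as an array of JSON objects used as
# few-shot examples in a prompt. Not Dust's actual `data` block API.
dataset = [
    {"question": "2 + 2?", "answer": "4"},
    {"question": "Capital of France?", "answer": "Paris"},
]

def build_few_shot_prompt(examples: list[dict], new_question: str) -> str:
    """Prepend each example as a Q/A pair, then ask the new question."""
    shots = "\n".join(f"Q: {ex['question']}\nA: {ex['answer']}" for ex in examples)
    return f"{shots}\nQ: {new_question}\nA:"

print(build_few_shot_prompt(dataset, "3 + 5?"))
```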
```diff
@@ -142,7 +142,7 @@ Each input's execution trace is completely independent and these cannot be cross
 runs of an app are stored along with the app version and all the associated block outputs. They can
 be retrieved from the `Runs` panel.
 
-Other [core blocks](#) allow the further parallelization of the execution such as the `map` and
+Other [core blocks](/core-blocks) allow the further parallelization of the execution such as the `map` and
 `reduce` blocks. Dust's execution engine will also take care of automatically parallelizing
 execution eagearly when they are used.
 
```
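
The second hunk fixes the same link in the passage on `map` and `reduce` blocks, which fan execution out over independent inputs and fold the results back together. A minimal Python sketch of that general pattern, assuming nothing about Dust's actual engine (the `run_block` stand-in is hypothetical):

```python
# Illustrative sketch only (not Dust's execution engine): parallel map over
# independent inputs, then a reduce over the collected results.
from concurrent.futures import ThreadPoolExecutor

def run_block(item: str) -> int:
    # stand-in for one independent block execution per input
    return len(item)

inputs = ["alpha", "beta", "gamma"]

with ThreadPoolExecutor() as pool:
    mapped = list(pool.map(run_block, inputs))  # `map`: one run per input

reduced = sum(mapped)                           # `reduce`: fold results back
print(mapped, reduced)
```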
