diff --git a/notebooks/llms.livemd b/notebooks/llms.livemd
index 363336c1..972b71eb 100644
--- a/notebooks/llms.livemd
+++ b/notebooks/llms.livemd
@@ -23,7 +23,7 @@ In this section we look at running [Meta's Llama](https://ai.meta.com/llama/) mo
-> **Note:** this is a very involved model, so the generation can take a long time if you run it on a CPU. Also, running on the GPU currently requires at least 16.3GiB of VRAM.
+> **Note:** this is a very involved model, so the generation can take a long time if you run it on a CPU. Also, running on the GPU currently requires at least 16GiB of VRAM.