
Fix typos in TextCNN.ipynb #3320

Merged
merged 1 commit on Jan 3, 2025
6 changes: 3 additions & 3 deletions examples/notebooks/TextCNN.ipynb
@@ -426,7 +426,7 @@
 "batch_size = 8 # A batch size of 8\n",
 "\n",
 "def create_iterators(batch_size=8):\n",
-" \"\"\"Heler function to create the iterators\"\"\"\n",
+" \"\"\"Helper function to create the iterators\"\"\"\n",
 " dataloaders = []\n",
 " for split in [train_list, validation_list, test_list]:\n",
 " dataloader = DataLoader(\n",
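The hunk above only shows the start of the notebook's `create_iterators` helper. A minimal runnable sketch of what it does is below; the `TensorDataset` splits are hypothetical stand-ins for the notebook's actual `train_list`, `validation_list`, and `test_list`, and the `DataLoader` arguments are assumptions since the diff truncates the call.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-ins for the notebook's train/validation/test splits.
train_list = TensorDataset(torch.randn(32, 4), torch.randint(0, 2, (32,)))
validation_list = TensorDataset(torch.randn(8, 4), torch.randint(0, 2, (8,)))
test_list = TensorDataset(torch.randn(8, 4), torch.randint(0, 2, (8,)))

def create_iterators(batch_size=8):
    """Helper function to create the iterators"""
    dataloaders = []
    for split in [train_list, validation_list, test_list]:
        # One DataLoader per split, built in a single loop as in the notebook.
        dataloader = DataLoader(split, batch_size=batch_size, shuffle=True)
        dataloaders.append(dataloader)
    return dataloaders

train_iter, valid_iter, test_iter = create_iterators(batch_size=8)
```

With 32 training samples and `batch_size=8`, the training iterator yields four batches.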
@@ -695,7 +695,7 @@
 "Similar to the training process function, we set up a function to evaluate a single batch. Here is what the eval_function does:\n",
 "\n",
 "* Sets model in eval mode.\n",
-"* With torch.no_grad(), no gradients are calculated for any succeding steps.\n",
+"* With torch.no_grad(), no gradients are calculated for any succeeding steps.\n",
 "* Generates x and y from batch.\n",
 "* Performs a forward pass on the model to calculate y_pred based on model and x.\n",
 "* Returns y_pred and y.\n",
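The evaluation steps listed in the hunk above can be sketched as follows. This is a minimal illustration, not the notebook's actual code: the `nn.Linear` stands in for the notebook's TextCNN model, and the `(engine, batch)` signature assumes the notebook wires `eval_function` into a PyTorch-Ignite-style engine.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the notebook's TextCNN model.
model = nn.Linear(4, 2)

def eval_function(engine, batch):
    model.eval()                   # set model in eval mode
    with torch.no_grad():          # no gradients for succeeding steps
        x, y = batch               # generate x and y from the batch
        y_pred = model(x)          # forward pass on the model
        return y_pred, y           # return y_pred and y

y_pred, y = eval_function(None, (torch.randn(8, 4), torch.randint(0, 2, (8,))))
```

Because the forward pass runs under `torch.no_grad()`, the returned predictions carry no gradient history.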
@@ -1002,4 +1002,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 4
-}
+}