[Doc] minor update to node classification tutorial #7840

Open

FilippoGuarda wants to merge 4 commits into master

Conversation

FilippoGuarda

Description

Minor change to the tutorial comment for enabling GPU training: following the commented-out code without also sending the model to the GPU raises an error.

Checklist

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [$CATEGORY] (such as [NN], [Model], [Doc], [Feature])

  • The PR is complete and small, read the Google eng practice (a CL is equivalent to a PR) to understand more about small PRs. In DGL, we consider PRs with fewer than 200 lines of core code change to be small (examples, tests, and documentation may be exempted).

  • Code is well-documented

  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

  • Related issue is referred to in this PR

Changes

  • Changed the tutorial comment to send both the model and the data to the GPU when training on it.

Rhett-Ying and others added 4 commits May 6, 2024 10:02
Co-authored-by: Muhammed Fatih BALIN <m.f.balin@gmail.com>
Co-authored-by: Xinyu Yao <77922129+yxy235@users.noreply.github.com>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-0-133.us-west-2.compute.internal>
If the data and the model are not both sent to the GPU, the code raises an error:
```
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! when resuming training
```
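For reference, here is a minimal sketch of the pattern the updated comment describes, based on DGL's node classification tutorial: both the graph (whose `ndata` tensors carry the features, labels, and masks) and the model must live on the same CUDA device before training. The GCN definition and training loop below are illustrative, not the exact tutorial code.

```python
import torch
import torch.nn.functional as F
import dgl
from dgl.nn import GraphConv


# Two-layer GCN mirroring the tutorial's model; the layer sizes are illustrative.
class GCN(torch.nn.Module):
    def __init__(self, in_feats, h_feats, num_classes):
        super().__init__()
        self.conv1 = GraphConv(in_feats, h_feats)
        self.conv2 = GraphConv(h_feats, num_classes)

    def forward(self, g, features):
        h = F.relu(self.conv1(g, features))
        return self.conv2(g, h)


dataset = dgl.data.CoraGraphDataset()
g = dataset[0]

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Send BOTH the graph (and therefore its feature/label tensors) and the model
# to the same device. Moving only one of them leaves parameters and inputs on
# different devices, which produces the RuntimeError quoted above.
g = g.to(device)
model = GCN(g.ndata["feat"].shape[1], 16, dataset.num_classes).to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
features, labels = g.ndata["feat"], g.ndata["label"]
train_mask = g.ndata["train_mask"]

for epoch in range(200):
    logits = model(g, features)
    loss = F.cross_entropy(logits[train_mask], labels[train_mask])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```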