Fixed failing tests for mps device #3143

Closed

pranavvp16 wants to merge 2 commits

Conversation

pranavvp16 (Contributor)

Draft PR for failing mps tests.

sweep-ai bot commented Nov 22, 2023

Apply Sweep Rules to your PR?

  • Apply: All new business logic should have corresponding unit tests.
  • Apply: Refactor large functions to be more modular.
  • Apply: Add docstrings to all functions and file headers.

@@ -22,6 +22,8 @@ def test_no_distrib(capsys):
     assert idist.backend() is None
     if torch.cuda.is_available():
         assert idist.device().type == "cuda"
+    elif torch.backends.mps.is_available():

Collaborator

We have to put a _torch_version_le_112 guard here, since we also run tests with older PyTorch versions where the mps backend does not exist.
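
A minimal sketch of what the guarded branch might look like. The flag _mps_backend_exists below is a hypothetical local stand-in for ignite's _torch_version_le_112 (the exact name and cutoff in ignite's code base are not shown in this PR); torch.backends.mps only exists from PyTorch 1.12 on.

    from packaging.version import Version

    import torch

    import ignite.distributed as idist

    # Hypothetical stand-in for the version flag the reviewer mentions:
    # only query torch.backends.mps on builds that actually define it.
    _mps_backend_exists = Version(torch.__version__) >= Version("1.12.0")

    if torch.cuda.is_available():
        assert idist.device().type == "cuda"
    elif _mps_backend_exists and torch.backends.mps.is_available():
        assert idist.device().type == "mps"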

Collaborator

Also, to run the test, you need to remove the @pytest.mark.skipif marker.
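
For illustration only, a marker of the kind shown below keeps pytest from running the test and is what would be deleted; the decorator, its condition, and the test name here are assumptions, not the actual contents of the test file.

    import pytest
    import torch

    import ignite.distributed as idist

    # Hypothetical skip marker; removing the decorator line lets pytest
    # collect and run the test on every configuration.
    @pytest.mark.skipif(not torch.backends.mps.is_available(), reason="mps not available")
    def test_device_is_mps():
        assert idist.device().type == "mps"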

@@ -43,6 +45,8 @@ def test_no_distrib(capsys):
     assert "ignite.distributed.utils INFO: backend: None" in out[-1]
     if torch.cuda.is_available():
         assert "ignite.distributed.utils INFO: device: cuda" in out[-1]
+    elif torch.backends.mps.is_available():

Collaborator

Same here

-    def forward(self, x, bias=None):
-        if bias is None:
-            bias = 0.0
+    def forward(self, x):

Collaborator

Let's revert this change
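
For reference, a sketch of the method after the requested revert; the class name DummyModel and the return expression are assumptions inferred from the visible diff, not code confirmed by this PR.

    import torch.nn as nn

    class DummyModel(nn.Module):
        # Reverted signature: bias stays an optional argument that
        # defaults to 0.0 when not supplied.
        def forward(self, x, bias=None):
            if bias is None:
                bias = 0.0
            return x + bias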

@@ -69,8 +66,8 @@ def get_first_element(output):
     optimizer = SGD(model.parameters(), 0.1)

     if trace:
-        example_inputs = (torch.randn(1), torch.randn(1)) if with_model_fn else torch.randn(1)
-        model = torch.jit.trace(model, example_inputs)
+        example_input = torch.randn(1)

vfdev-5 (Collaborator) commented Nov 23, 2023

Same here, let's revert this. You probably need to merge origin/master into your branch.
Also, it is good practice to work on a git branch rather than directly on master (pranavvp16:master).
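
A sketch of the suggested workflow; the branch name fix-mps-tests is hypothetical, and origin is assumed to track the up-to-date repository.

    git fetch origin                  # get the latest master
    git checkout -b fix-mps-tests     # work on a feature branch, not on master
    git merge origin/master           # bring the branch up to date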

pranavvp16 closed this Nov 23, 2023