Set seed in test_learned_preference_objective to stop it from flaking (#2145)

Summary:
## Motivation

* The test was flaky because the amount of numerical error varied with the random inputs, so the test now sets a seed drawn at random from between 0 and 10
* Changed some data to double precision to avoid a warning

Pull Request resolved: #2145

Test Plan:
* Checked that the test passes for each seed between 0 and 10
* Confirmed that there are seeds that do cause it to fail
* Increased the number of samples substantially to confirm that the numerical error becomes small when the number of samples is large -- in other words, the error is due to a low number of samples
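The last point can be illustrated with a small standalone sketch (a hypothetical helper, not part of the test suite): the error of a seeded Monte Carlo estimate shrinks roughly as 1/sqrt(n), which is why a tolerance that is safe at a large sample count can flake at a small one.

```python
import torch


def mc_mean_error(n_samples: int, seed: int = 0) -> float:
    # Seeded Monte Carlo estimate of E[x] for x ~ N(0, 1); the true mean
    # is 0, so the returned value is the absolute estimation error.
    torch.manual_seed(seed)
    samples = torch.randn(n_samples, dtype=torch.float64)
    return samples.mean().abs().item()


# With a fixed seed the error is deterministic; with more samples it is
# smaller, so a tolerance that flakes at small n passes reliably at large n.
small_n_error = mc_mean_error(64)
large_n_error = mc_mean_error(64_000)
```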

Reviewed By: Balandat

Differential Revision: D52002349

Pulled By: esantorella

fbshipit-source-id: c1908bdf649db0d51c8e8c2806b9e55258ffb855
esantorella authored and facebook-github-bot committed Dec 9, 2023
1 parent f2003dd commit 9354fd7
Showing 1 changed file with 3 additions and 1 deletion.
4 changes: 3 additions & 1 deletion — test/acquisition/test_objective.py

@@ -454,6 +454,8 @@ def _get_pref_model(
         return pref_model

     def test_learned_preference_objective(self) -> None:
+        seed = torch.randint(low=0, high=10, size=torch.Size([1]))
+        torch.manual_seed(seed)
         pref_model = self._get_pref_model(dtype=torch.float64)

         og_sample_shape = 3
@@ -492,7 +494,7 @@ def test_learned_preference_objective(self) -> None:
         with self.assertRaisesRegex(
             ValueError, "samples should have at least 3 dimensions."
         ):
-            pref_obj(torch.rand(q, self.x_dim))
+            pref_obj(torch.rand(q, self.x_dim, dtype=torch.float64))
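The two added lines follow a pattern worth noting: the seed itself is drawn at random from the verified range and then fixed, so all randomness within a run is reproducible. A minimal standalone sketch of that pattern (tensor shapes here are illustrative, not taken from the test):

```python
import torch

# Draw a seed from the range the test plan verified (0-9 inclusive:
# torch.randint's `high` bound is exclusive), then fix the global RNG state.
seed = torch.randint(low=0, high=10, size=torch.Size([1]))
torch.manual_seed(seed)  # a 1-element tensor is accepted and converted via int()

# Double-precision inputs, matching the dtype change in the diff.
x = torch.rand(3, 4, dtype=torch.float64)

# Re-seeding with the same value reproduces the same draw exactly.
torch.manual_seed(seed)
assert torch.equal(x, torch.rand(3, 4, dtype=torch.float64))
```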
