[Triplet Margin Loss] Issue 1118 #1120
base: main
Conversation
@vroulet May I know if there's anything that needs to be changed?
Thank you @cvnad1 for doing this! Sorry for the delay. Here are some comments:
optax/losses/_self_supervised.py
Outdated
anchor: The anchor embeddings. Shape: [batch_size, feature_dim].
positive: The positive embeddings. Shape: [batch_size, feature_dim].
negative: The negative embeddings. Shape: [batch_size, feature_dim].
margin: The margin value. Default: 1.0.
No need to put the default values since they are given in the signature.
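That is, something along these lines (a sketch using the argument names from this PR's docstring; only the `Default:` notes are dropped):

```python
def triplet_margin_loss(anchor, positive, negative, margin=1.0):
  """Triplet margin loss function.

  Args:
    anchor: The anchor embeddings. Shape: [batch_size, feature_dim].
    positive: The positive embeddings. Shape: [batch_size, feature_dim].
    negative: The negative embeddings. Shape: [batch_size, feature_dim].
    margin: The margin value.
  """
```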
optax/losses/_self_supervised.py
Outdated
by V. Balntas et al. Default: False.
reduction: Specifies the reduction to apply to the output:
  'none' | 'mean' | 'sum'. Default: 'mean'.
Add a reference.
optax/losses/_self_supervised.py
Outdated
margin: The margin value. Default: 1.0.
p: The norm degree for pairwise distance. Default: 2.
eps: Small epsilon value to avoid numerical issues. Default: 1e-6.
swap: Use the distance swap optimization from "Learning shallow
Use rst formatting for references (see e.g. the docstring of Adam)
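For illustration, a hedged sketch of that rst style as it appears in optax docstrings such as adam's; the completed paper title and `<paper-url>` below are placeholders for the Balntas et al. 2016 reference the snippet appears to cite:

```python
"""Triplet margin loss function.

  ...

  References:
    V. Balntas et al., `Learning local feature descriptors with triplets
    and shallow convolutional neural networks <paper-url>`_, 2016
  """
```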
optax/losses/_self_supervised.py
Outdated
    swap: bool = False,
    reduction: str = 'mean',
) -> chex.Array:
  """Triplet margin loss function.
Add an example (doctest)
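A minimal sketch of such a doctest, assuming the function is exported as `optax.losses.triplet_margin_loss` and takes the `reduction` argument from this PR; checking the output shape avoids backend-dependent numerics:

```python
"""Triplet margin loss function.

  Examples:
    >>> import jax.numpy as jnp
    >>> from optax.losses import triplet_margin_loss
    >>> anchors = jnp.zeros((2, 4))
    >>> positives = jnp.zeros((2, 4))
    >>> negatives = jnp.ones((2, 4))
    >>> triplet_margin_loss(anchors, positives, negatives,
    ...                     reduction='none').shape
    (2,)
  """
```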
@@ -53,5 +53,41 @@ def test_batched(self):
    )


class TripletMarginLossTest(chex.TestCase):

  def setUp(self):
Avoid using numerical values as expected returns; they may fail depending on the backend, for example.
You may consider simple test cases with a "handmade" function (see e.g. the lbfgs tests). You can check for specific inputs (like zeros or ones).
You may also add a test for some specific behaviors (like using swap here).
Also, you should test this function under jit/vmap etc. (see the chex.all_variants utility in some other tests); a sketch of what such a test could look like follows.
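A rough sketch of a chex.all_variants test, not a drop-in replacement: the module path and the `reduction='none'` argument are assumed from this PR. The inputs are chosen so the expected loss is exactly zero, sidestepping backend-dependent numerics:

```python
import chex
import jax.numpy as jnp
from absl.testing import absltest

from optax.losses import _self_supervised  # module path assumed from this PR


class TripletMarginLossTest(chex.TestCase):

  @chex.all_variants
  def test_zero_loss_for_distant_negative(self):
    # With identical anchor/positive pairs and a far-away negative,
    # max(d(a, p) - d(a, n) + margin, 0) is exactly zero, so no
    # backend-dependent numerical constants are needed.
    anchors = jnp.zeros((2, 4))
    positives = jnp.zeros((2, 4))
    negatives = 10.0 * jnp.ones((2, 4))
    # Close over the static string argument so jit variants don't trace it.
    loss_fn = self.variant(
        lambda a, p, n: _self_supervised.triplet_margin_loss(
            a, p, n, reduction='none'))
    chex.assert_trees_all_close(
        loss_fn(anchors, positives, negatives), jnp.zeros((2,)))


if __name__ == '__main__':
  absltest.main()
```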
@vroulet We have worked on your suggestion and all the tests are passing. I think the code is ready to be merged.
@vroulet
@vroulet We tried multiple things to solve the error in the pipeline; the tests for triplet_loss are passing locally. The errors we are getting here seem not to come from the function we implemented. Can you guide us on this?
Hello @cvnad1, @Saanidhyavats,
@vroulet We have modified the code based on your review. Could you please verify if everything's correct?
@vroulet Hi Vincent, we added code and tests for the triplet margin loss function (#1118). Kindly review the code and comment in case any changes are needed.