Fix XGLM loss computation (PyTorch and TensorFlow) #35878
base: main
Conversation
Hi @damianoamatruda, yes, the original code is incorrect! However, a simpler fix would be to change the label padding value to -100 so the loss function ignores those positions.
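For context, a minimal sketch of what that suggestion amounts to: positions labeled -100 are skipped by PyTorch's cross-entropy, so padding contributes nothing to the loss. The tensors here are made up for illustration and are not the PR's actual diff.

```python
import torch
import torch.nn.functional as F

# Toy batch: 1 sequence, 4 positions, vocab size 5.
logits = torch.randn(1, 4, 5)

# The last two positions are padding, labeled -100: cross_entropy skips
# them entirely instead of training the model to predict the pad token.
labels = torch.tensor([[2, 3, -100, -100]])

loss = F.cross_entropy(
    logits.view(-1, logits.size(-1)),  # (4, 5)
    labels.view(-1),                   # (4,)
    ignore_index=-100,                 # the default, shown for clarity
)
```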
Force-pushed from 8df8015 to 180e037.
Hi @Rocketknight1, thank you for the clear explanation! I've updated the PR to shift only the labels, as previously done, and replaced the padding token with the mask value. I've also updated the PyTorch test to match the changes introduced in the newly merged PR #35659.
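In other words, a rough sketch of the label preparation being described, with illustrative names (`labels`, `pad_token_id`) rather than the exact PR code:

```python
# Keep the original label alignment (shift only, no pad appended) and map
# padding positions to -100 so the loss function ignores them.
masked_labels = labels.masked_fill(labels == pad_token_id, -100)
```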
Yes, LGTM now! cc @ArthurZucker for core maintainer review
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
run-slow: xglm
This comment contains run-slow, running the specified jobs: ['models/xglm'] ...
Hi @damianoamatruda, I'm seeing some failures in the slow tests for XGLM, can you take a look? You can check the CI logs, or to run slow tests locally, you can do something like `RUN_SLOW=1 pytest tests/models/xglm`.
Force-pushed from 76b2e4e to 0ca7d4f.
Hi @Rocketknight1, I took a look at the errors, which were related to XGLM and similar models but weren't connected to the loss computation. I fixed them by taking inspiration from […]. Now, however, with the latest rebase, there are failing tests that aren't related to XGLM. Can you do something about it?
Hi @damianoamatruda, I'm not sure exactly what's causing that! It's likely those tests were just flaky on a past commit - can you try rebasing again? If they still won't go away then I'll see if we can actually fix or skip them on main.
Force-pushed from 0ca7d4f to 33f7ed6.
@Rocketknight1, I rebased and the test […] still fails.
Force-pushed from 76911dc to fbf492c.
Yeah, that test is a problem on main.
Force-pushed from fbf492c to 740b5b9.
Tests are finally green! Pinging @Cyrilvallez for core maintainer review
Great!
Hey! All LGTM concerning the loss part!
However, I must say that I am skeptical concerning the change in set/get embeddings. It looks like we are changing the input type of these functions (the layer vs. the underlying layer data), which may break existing code. Moreover, the failing test explicitly states that it is expected to fail (and it was never fixed).
TLDR I'd rather we revert the part on embeddings, and keep the loss part 🤗
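For reference, a hedged sketch of the convention at stake (an illustrative toy model, not the XGLM source): the getter/setter pair exchanges the embedding module itself, so switching to the underlying weight tensor would change the functions' input/output types and could break existing callers.

```python
import torch.nn as nn

class ToyModel(nn.Module):
    def __init__(self, vocab_size: int = 10, hidden_size: int = 4):
        super().__init__()
        self.embed_tokens = nn.Embedding(vocab_size, hidden_size)

    def get_input_embeddings(self) -> nn.Embedding:
        # Callers rely on getting the layer, e.g. to read .weight or to
        # resize it; returning a bare tensor would break them.
        return self.embed_tokens

    def set_input_embeddings(self, value: nn.Embedding) -> None:
        # Symmetrically, callers pass an nn.Embedding, not a tensor.
        self.embed_tokens = value
```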
This updates the expected output string of test_xglm_sample for torch 2.0 to the correct one and removes the one for torch 1.13.1 + cu116 (transformers moved to torch 2.0 with PR huggingface#35358).
Force-pushed from 740b5b9 to 88b01fd.
Hi @Cyrilvallez, done! The tests now pass without requiring the commits for the embeddings. How did you fix/disable the failing tests? Thank you for the review.
What does this PR do?
This PR fixes the loss computation for XGLM in both PyTorch and TensorFlow implementations.
The labels were shifted by one and the padding token was appended, causing artificial loss contributions, inconsistencies between non-padded and right-padded sequences, and potential bias toward predicting padding tokens.
The updated implementations ignore the last logit and do not append the padding token to the labels, aligning with the behavior in GPT-2 and other models.
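A hedged sketch of the pattern described above, with illustrative names; the previous (buggy) variant is shown in comments for contrast:

```python
import torch
import torch.nn as nn

def causal_lm_loss(logits: torch.Tensor, labels: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    # Previous behavior (sketched): shift labels left and append a pad token,
    # so the final position is trained to predict padding:
    #   shift_labels = torch.cat(
    #       [labels[:, 1:], labels.new_full((labels.size(0), 1), pad_token_id)], dim=-1
    #   )
    #
    # Fixed behavior: drop the last logit instead of appending a label, and
    # mask padding with -100 so it never contributes to the loss.
    labels = labels.masked_fill(labels == pad_token_id, -100)
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()
    loss_fct = nn.CrossEntropyLoss()  # ignore_index defaults to -100
    return loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
```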
The logic of the computation was first identified in #22540, where it was ported from the PyTorch implementation to the TensorFlow one for consistency. In this PR I've reverted the TensorFlow implementation to its previous, valid behavior and updated the PyTorch implementation to match it.
I've also added XGLM tests to ensure that the losses of non-padded and padded inputs match.
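Such a check might look roughly like the sketch below (the checkpoint name, padding amount, and tolerances are assumptions for illustration, not the PR's actual test code):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = AutoModelForCausalLM.from_pretrained("facebook/xglm-564M")
model.eval()

enc = tok("Hello world", return_tensors="pt")
loss_plain = model(**enc, labels=enc["input_ids"]).loss

# Same text, right-padded by a few positions; the padded labels are masked
# with -100, so after the fix they should not change the loss.
enc_pad = tok(
    "Hello world",
    return_tensors="pt",
    padding="max_length",
    max_length=enc["input_ids"].shape[1] + 4,
)
labels_pad = enc_pad["input_ids"].clone()
labels_pad[enc_pad["attention_mask"] == 0] = -100
loss_padded = model(**enc_pad, labels=labels_pad).loss

torch.testing.assert_close(loss_plain, loss_padded, rtol=1e-4, atol=1e-4)
```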
This bug was discovered in a joint project while collaborating with @mdrpanwar and @ayushkumartarun.
Who can review?
@Rocketknight1 @gante @ArthurZucker