-
Notifications
You must be signed in to change notification settings - Fork 29.5k
Refactor Bamba tests to inherit from CausalLMModelTester base classes #38587
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Are you sure you want to change the base?
Conversation
- Update BambaModelTester to inherit from CausalLMModelTester - Update BambaModelTest to inherit from CausalLMModelTest - Remove duplicate methods already implemented in base classes - Fix method signatures and attribute names for compatibility - Maintain Bamba-specific functionality while leveraging base class code This change reduces code duplication and follows the pattern established in other causal LM models like Gemma.
GraniteMoeHybrid inherits from Bamba tests but doesn't have mlp_bias in its config. This adds mlp_bias=False to the test config to match Bamba's default value and fix the AttributeError.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. |
batch_size=batch_size, | ||
seq_length=seq_length, |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
As my comment in #38475, we can avoid to batch_size
, seq_length
here and in the arguments above, as the values are the same as the default ones in CausalLMModelTester
.
Maybe the same for (many) several attributes/arguments.
base_model_class = BioGptModel | ||
for_causal_lm_class = BioGptForCausalLM | ||
causal_lm_class = BioGptForCausalLM | ||
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
same comment as above for the init below
mamba_d_state=16, | ||
mamba_d_conv=4, | ||
mamba_expand=2, | ||
mamba_dt_rank="auto", | ||
mamba_conv_bias=True, | ||
mamba_proj_bias=False, | ||
mamba_inner_layernorms=True, | ||
n_mamba_heads=2, |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I am not sure why we have mamba here while this is granitemoebird test file
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
ah ok, i see we have
class GraniteMoeHybridModelTester(BambaModelTester)
sorry
@unittest.skip(reason="GraniteMoeHybrid has different cache handling") | ||
def test_decoder_model_past_with_large_inputs(self): | ||
pass |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
From code, I don't see this is skipped on current main
?
Another OpenHands + Opus 4:
What does this PR do?
This PR refactors the Bamba model tests to inherit from the
CausalLMModelTester
andCausalLMModelTest
base classes, following the pattern established in other causal LM models like Gemma. This reduces code duplication and makes the tests more maintainable.Changes made:
BambaModelTester
to inherit fromCausalLMModelTester
BambaModelTest
to inherit fromCausalLMModelTest
create_and_check_model
create_and_check_for_causal_lm
BambaModelTest
Testing:
All tests pass successfully:
pytest tests/models/bamba/test_modeling_bamba.py::BambaModelTest
- 93 passed, 130 skippedmake fixup
Before submitting
Pull Request section?
to it if that's the case.
documentation guidelines, and
here are tips on formatting docstrings.
EDIT: Told the bot not to ping Arthur (sorry Arthur)