
Incompatibility of Newly Released Large Parameter Models (650M & 111M) with Existing Code #11


Description

@Gengsheng-Li

Dear Authors,

Hello! Thank you very much for open-sourcing this incredibly inspiring project. While using the models provided, we noticed that the newly released 650M and 111M models seem to be incompatible with the existing code. The issue we encountered is as follows:

The error occurs in the BrainLMDecoder class's forward function within the brainlm_mae/modeling_brainlm.py file:

File "/data0/user/gsli/BrainLM/brainlm_mae/modeling_brainlm.py", line 716, in forward
    decoder_outputs = self.decoder(latent, xyz_vectors, ids_restore)
File "/data0/user/gsli/.conda/envs/brainlm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
File "/data0/user/gsli/BrainLM/brainlm_mae/modeling_brainlm.py", line 508, in forward
    x_ = x_ + xyz_projection
RuntimeError: The size of tensor a (512) must match the size of tensor b (1280) at non-singleton dimension 3
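For reference, here is a minimal, self-contained sketch that reproduces the same broadcast failure. The hidden dimensions (512 vs 1280) are taken from the error message above; all other shapes are made up for illustration and are not the actual tensor sizes inside BrainLM:

```python
import torch

# x_ stands in for the decoder tokens and xyz_projection for the projected
# coordinate embeddings. Their last (hidden) dimensions disagree, 512 vs 1280,
# exactly as in the RuntimeError above.
x_ = torch.randn(1, 2, 4, 512)
xyz_projection = torch.randn(1, 2, 4, 1280)

# Raises: RuntimeError: The size of tensor a (512) must match the size of
# tensor b (1280) at non-singleton dimension 3
x_ = x_ + xyz_projection
```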

This error does not occur with the old 13M model. Is there an updated version of the code that is compatible with the new models, or could you provide some guidance on how to resolve this issue?
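In case it helps with diagnosis, a small snippet along these lines can dump the decoder parameter shapes from the new checkpoints, which should show whether the 650M/111M weights use a different decoder hidden size than the one the current code assumes. The checkpoint filename is a placeholder for wherever the downloaded weights are stored:

```python
import torch

# Placeholder path; point this at the downloaded 650M or 111M checkpoint file.
state_dict = torch.load("pytorch_model.bin", map_location="cpu")

# Print every decoder parameter with its shape so the hidden dimensions can
# be compared against what modeling_brainlm.py expects.
for name, tensor in state_dict.items():
    if "decoder" in name:
        print(name, tuple(tensor.shape))
```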

Once again, thank you for your excellent work and for sharing it with the community!

Looking forward to your response.

Best regards,
Gengsheng Li
