
env issue #9

Open
TT2TER opened this issue Aug 24, 2024 · 6 comments

Comments

@TT2TER

TT2TER commented Aug 24, 2024

ERROR: Could not find a version that satisfies the requirement byted-cruise==0.7.3 (from versions: none)
ERROR: No matching distribution found for byted-cruise==0.7.3

This problem happens with both Python 3.10 and 3.12.

@Sierkinhane
Collaborator

Sierkinhane commented Aug 24, 2024

Hi, just remove this dependency.
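
A minimal sketch of that fix, assuming the pin lives in a requirements.txt at the repository root (the file names and this helper script are assumptions, not part of the repo):

# strip_internal_deps.py -- a sketch, not part of the repository.
# byted-cruise is an internal package that pip cannot resolve from PyPI,
# so drop it (and any similar internal pins) before installing.
INTERNAL = {"byted-cruise"}

with open("requirements.txt") as src:
    kept = [
        line for line in src
        # compare the package name before any "==" version pin
        if line.split("==")[0].strip() not in INTERNAL
    ]

with open("requirements.clean.txt", "w") as dst:
    dst.writelines(kept)

Installing from the filtered file (pip install -r requirements.clean.txt) should then no longer fail on byted-cruise.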

@Hasnat79
Contributor

Hasnat79 commented Aug 28, 2024

I also faced similar issues with deepspeed and iotop.
Python: 3.9

@Sierkinhane
Collaborator

Hi, those are internal packages, so you can just remove those dependencies. What is the issue with deepspeed?

@Hasnat79
Contributor

Hi, I solved that problem. I removed some packages and could run the infer CLI command. Right now I'm trying to decouple the inference into a notebook.
The problem I am facing is when I try to run the following portion:

input_ids_llava = torch.cat([
    (torch.ones(input_ids.shape[0], 1) * uni_prompting.sptids_dict['<|mmu|>']).to(device),
    input_ids_system,
    (torch.ones(input_ids.shape[0], 1) * uni_prompting.sptids_dict['<|soi|>']).to(device),
    # place your img embedding here
    (torch.ones(input_ids.shape[0], 1) * uni_prompting.sptids_dict['<|eoi|>']).to(device),
    input_ids,
], dim=1).long()

I get a runtime error:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument tensors in method wrapper_CUDA_cat)

Any idea on this ?

@Sierkinhane
Collaborator

It looks like the tensors input_ids_system and input_ids are not on the same device. You can manually move them to the GPU.

I didn't encounter this problem.
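
A minimal sketch of that suggestion (the device selection and the tensor shapes below are illustrative assumptions; the point is that every tensor handed to torch.cat must already be on the same device):

import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Illustrative stand-ins for the tokenized prompts; in the notebook these
# come from the tokenizer and may still live on the CPU.
input_ids = torch.randint(0, 1000, (1, 16))
input_ids_system = torch.randint(0, 1000, (1, 8))

# Moving both onto the target device before concatenation avoids the
# "Expected all tensors to be on the same device" RuntimeError.
input_ids = input_ids.to(device)
input_ids_system = input_ids_system.to(device)

input_ids_llava = torch.cat([input_ids_system, input_ids], dim=1).long()
print(input_ids_llava.device)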

@Hasnat79
Contributor

I solved the problem by changing input_ids_system and input_ids with the following code:
input_ids_system = input_ids_system.clone().detach().to(device)

Thanks
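
The comment shows only the first of the two changed lines; the analogous line for input_ids (an assumption based on the description above, not shown in the original comment) would be:

# assumed counterpart to the line above: copy the tensor off any computation
# graph and move it to the same device as the other prompt tensors
input_ids = input_ids.clone().detach().to(device)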
