
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_addmm) #573

Closed
Ch-rode opened this issue on Jul 31, 2023 · 4 comments
Labels: bug (Something isn't working)

Comments

@Ch-rode commented on Jul 31, 2023

Environment info

  • adapter-transformers version: 3.2.1
  • Platform: Linux-4.18.0-372.26.1.el8_6.x86_64-x86_64-with-glibc2.10
  • Python version: 3.8.5
  • Huggingface_hub version: 0.12.0
  • PyTorch version (GPU?): 1.13.1 (True)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:

Details

Hello! I have trained a model with adapter-hub and saved the checkpoints. However, when I try to resume training, I get the following error:

trainer.train(resume_from_checkpoint=resume_from_checkpoint)  

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_addmm)

I checked, and both the inputs and the model are on CUDA.
The traceback is the following:

trainer.train(resume_from_checkpoint=resume_from_checkpoint)
  File "/home/rodelc/anaconda3/envs/temBERTure_datavis/lib/python3.8/site-packages/transformers/trainer.py", line 1543, in train
    return inner_training_loop(
  File "/home/rodelc/anaconda3/envs/temBERTure_datavis/lib/python3.8/site-packages/transformers/trainer.py", line 1791, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "/home/rodelc/anaconda3/envs/temBERTure_datavis/lib/python3.8/site-packages/transformers/trainer.py", line 2539, in training_step
    loss = self.compute_loss(model, inputs)
  File "/home/rodelc/TemBERTure_Tm_regression/CONFIG4:WEIGHT_DECAY_0.2/code/train.py", line 31, in compute_loss
    outputs = model(**inputs)
  File "/home/rodelc/anaconda3/envs/temBERTure_datavis/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/rodelc/anaconda3/envs/temBERTure_datavis/lib/python3.8/site-packages/transformers/adapters/models/bert/adapter_model.py", line 85, in forward
    head_outputs = self.forward_head(
  File "/home/rodelc/anaconda3/envs/temBERTure_datavis/lib/python3.8/site-packages/transformers/adapters/heads/base.py", line 833, in forward_head
    return_output = head_module(all_outputs, cls_output, attention_mask, return_dict, **kwargs)
  File "/home/rodelc/anaconda3/envs/temBERTure_datavis/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/rodelc/anaconda3/envs/temBERTure_datavis/lib/python3.8/site-packages/transformers/adapters/heads/base.py", line 143, in forward
    logits = super().forward(cls_output)
  File "/home/rodelc/anaconda3/envs/temBERTure_datavis/lib/python3.8/site-packages/torch/nn/modules/container.py", line 20…, in forward
    input = module(input)
  File "/home/rodelc/anaconda3/envs/temBERTure_datavis/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/rodelc/anaconda3/envs/temBERTure_datavis/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)

and my code is this:

class RegressionTrainer(AdapterTrainer):
    '''
    1. Extract the "labels" from the inputs dictionary using the pop method. This suggests that the input dictionary contains a key named "labels" that corresponds to the ground truth labels for the regression task.
    2. Pass the remaining inputs dictionary to the model to obtain the model's outputs.
    3. Extract the logits from the outputs tensor. It assumes that the model's output is a tensor, and it retrieves the logits corresponding to the first element of each sample ([:, 0]).
    4. Compute the mean squared error (MSE) loss between the logits and the labels using the torch.nn.functional.mse_loss function. This calculates the squared difference between the predicted values and the ground truth labels.
    The method returns either the computed loss alone (loss) or a tuple containing the loss and the model's outputs ((loss, outputs)), based on the return_outputs flag.
    '''

    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")

        inputs_on_gpu = all(tensor.is_cuda for tensor in inputs.values())
        print("Are inputs on GPU?", inputs_on_gpu)
        model_on_gpu = next(model.parameters()).is_cuda
        print("Is model on GPU?", model_on_gpu)

        outputs = model(**inputs)
        logits = outputs[0][:, 0]
        loss = torch.nn.functional.mse_loss(logits, labels)
        return (loss, outputs) if return_outputs else loss


training_args = TrainingArguments(
    output_dir=OUTPUT_DIR,  # + '/' + 'weigth' + str(WEIGHT_DECAY) + '_lr' + str(LEARNING_RATE),
    learning_rate=LEARNING_RATE,
    num_train_epochs=EPOCHS,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    save_total_limit=2,
    metric_for_best_model="loss",
    load_best_model_at_end=True,
    weight_decay=WEIGHT_DECAY,
    # eval_accumulation_steps=50,
    fp16=True,
    report_to='wandb',
    save_on_each_node=True,
    greater_is_better=False,
    seed=42,
    # callback=[EarlyStoppingCallback(early_stopping_patience=3, early_stopping_threshold=(-3))]
)

trainer = RegressionTrainer(
    model=model,
    args=training_args,
    train_dataset=ds["train"],
    eval_dataset=ds["validation"],
    compute_metrics=compute_metrics_for_regression,
)
trainer.add_callback(EarlyStoppingCallback(early_stopping_patience=3, early_stopping_threshold=-3))

model.train_adapter(['TemBERTure_adapter'])

old_collator = trainer.data_collator
trainer.data_collator = lambda data: dict(old_collator(data))

trainer.train(resume_from_checkpoint=resume_from_checkpoint)
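Since the traceback ends in the prediction head's F.linear call, one thing worth checking is where every individual parameter actually lives right before resuming; next(model.parameters()) only inspects the first parameter, so a head restored onto the CPU would slip past the check in compute_loss above. A small diagnostic sketch (not part of the original script; it assumes the model object defined above):

from collections import Counter

def report_parameter_devices(model):
    # Count parameters per device and list any that are still on the CPU.
    devices = Counter(str(p.device) for p in model.parameters())
    print("Parameter devices:", dict(devices))
    for name, param in model.named_parameters():
        if param.device.type == "cpu":
            print("Still on CPU:", name)

report_parameter_devices(model)  # call right before trainer.train(...)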

I have tried everything. Any ideas on how to successfully resume training? Thanks a lot!

@Ch-rode added the question (Further information is requested) label on Jul 31, 2023
@julianpollmann

Hey,

I had a similar issue. I ended up running my training script on just one GPU:
CUDA_VISIBLE_DEVICES=0 python your_training_script.py
In a Jupyter Notebook:

import os
os.environ['CUDA_VISIBLE_DEVICES'] = "0"

This rules out multi-GPU training, though.
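If restricting to a single GPU is not an option, another hedged workaround is to push the whole model back onto the trainer's device once the checkpoint has been restored, for example with a small TrainerCallback. This is only a sketch under the assumption that the restored head weights are the ones left on the CPU; MoveToDeviceCallback is a hypothetical name, and trainer refers to the object from the original post:

from transformers import TrainerCallback

class MoveToDeviceCallback(TrainerCallback):
    # Hypothetical workaround: when resuming, weights restored from the checkpoint
    # (e.g. the prediction head) may be left on the CPU, so move the whole model
    # back onto the training device before the first step runs.
    def on_train_begin(self, args, state, control, **kwargs):
        model = kwargs.get("model")
        if model is not None:
            model.to(args.device)

trainer.add_callback(MoveToDeviceCallback())
trainer.train(resume_from_checkpoint=resume_from_checkpoint)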

@adapter-hub-bert (Member)

This issue has been automatically marked as stale because it has been without activity for 90 days. This issue will be closed in 14 days unless you comment or remove the stale label.

@adapter-hub-bert (Member)

This issue was closed because it was stale for 14 days without any activity.

@adapter-hub-bert closed this as not planned (stale) on Nov 30, 2023
@calpt (Member) commented on Dec 1, 2023

This issue should be resolved with the release of the new Adapters library (v0.1.0). Please re-open and let us know if this is not the case. Thanks!
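For anyone migrating, here is a minimal sketch of what the equivalent setup might look like with the new adapters package (reusing the model, dataset, and training-argument names from the original post; if the model was built with AutoAdapterModel, import that class from adapters instead and skip adapters.init):

# pip install adapters   (successor of adapter-transformers, v0.1.0+)
import adapters
from adapters import AdapterTrainer

adapters.init(model)                          # add adapter support to a plain Hugging Face model
model.train_adapter(['TemBERTure_adapter'])   # same adapter name as in the original script

trainer = AdapterTrainer(                     # or a RegressionTrainer subclass of it, as above
    model=model,
    args=training_args,
    train_dataset=ds["train"],
    eval_dataset=ds["validation"],
    compute_metrics=compute_metrics_for_regression,
)
trainer.train(resume_from_checkpoint=resume_from_checkpoint)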

@calpt added the bug (Something isn't working) label and removed the question and Stale labels on Dec 1, 2023
@calpt closed this as completed on Dec 1, 2023