Is there an existing issue for this bug?
The bug has not been fixed in the latest main branch; I have checked the latest main branch.

Do you feel comfortable sharing a concise (minimal) script that reproduces the error? :)
Yes, I will share a minimal reproducible script.
🐛 Describe the bug
```
[rank0]: Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
[rank7]: Traceback (most recent call last):
[rank7]:   File "lora_finetune.py", line 518, in <module>
[rank7]:     train(args)
[rank7]:   File "lora_finetune.py", line 223, in train
[rank7]:     model = booster.enable_lora(model, lora_config=lora_config, pretrained_dir=args.adapter)
[rank7]:   File "/opt/conda/lib/python3.8/site-packages/colossalai/booster/booster.py", line 289, in enable_lora
[rank7]:     return self.plugin.enable_lora(model, pretrained_dir, lora_config, bnb_quantization_config)
[rank7]:   File "/opt/conda/lib/python3.8/site-packages/colossalai/booster/plugin/hybrid_parallel_plugin.py", line 1525, in enable_lora
[rank7]:     peft_model = PeftModel.from_pretrained(model, pretrained_dir, is_trainable=True)
[rank7]:   File "/opt/conda/lib/python3.8/site-packages/peft/peft_model.py", line 586, in from_pretrained
[rank7]:     model.load_adapter(
[rank7]:   File "/opt/conda/lib/python3.8/site-packages/peft/peft_model.py", line 1177, in load_adapter
[rank7]:     adapters_weights = load_peft_weights(model_id, device=torch_device, **hf_hub_download_kwargs)
[rank7]:   File "/opt/conda/lib/python3.8/site-packages/peft/utils/save_and_load.py", line 586, in load_peft_weights
[rank7]:     adapters_weights = torch_load(filename, map_location=torch.device(device))
[rank7]:   File "/opt/conda/lib/python3.8/site-packages/peft/utils/save_and_load.py", line 499, in torch_load
[rank7]:     return torch.load(*args, weights_only=weights_only, **kwargs)
[rank7]:   File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 1096, in load
[rank7]:     raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
[rank7]: _pickle.UnpicklingError: Weights only load failed. Re-running torch.load with weights_only set to False will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
[rank7]: Please file an issue with the following so that we can make weights_only=True compatible with your use case: WeightsUnpickler error: Attempted to set the storage of a tensor on device "cpu" to a storage on different device "cuda:7". This is no longer allowed; the devices must match.
```
The error is raised when loading a previously trained LoRA adapter to continue training.
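For reference, the unpickler fails because the adapter checkpoint stores tensors on `cuda:7` while the loader maps them to CPU, and PyTorch 2.4's `weights_only=True` path rejects the device mismatch. A minimal workaround sketch (the function name `load_adapter_weights` is hypothetical, not a library API; `weights_only=False` re-enables the full pickle loader and should only be used on files from a trusted source):

```python
import torch

def load_adapter_weights(path: str) -> dict:
    """Load a LoRA adapter checkpoint that was saved on a CUDA device.

    map_location="cpu" remaps every stored tensor onto CPU, avoiding the
    'devices must match' WeightsUnpickler error. weights_only=False falls
    back to the unrestricted pickle loader -- trusted files only.
    """
    return torch.load(path, map_location="cpu", weights_only=False)
```

Tensors can be moved back to the target device afterwards with `.to(device)`.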
Environment
torch 2.4.0
CUDA 12.1
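The mismatch can also be avoided at save time by moving adapter tensors to CPU before serializing, so later loads with `weights_only=True` succeed on any device. A sketch under that assumption (`save_cpu_checkpoint` is a hypothetical helper, not part of ColossalAI or PEFT):

```python
import torch

def save_cpu_checkpoint(state_dict: dict, path: str) -> None:
    """Detach and move every tensor to CPU before saving, so the file
    contains no CUDA storages and loads cleanly with weights_only=True."""
    torch.save({k: v.detach().cpu() for k, v in state_dict.items()}, path)
```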