-
Does the machine have a GPU? If so, how much GPU memory does it have?
-
I'm using the already-merged HF 33B model.
python scripts/inference/inference_hf.py \
    --base_model ~/chinese_alpaca_33b_bin \
    --with_prompt --interactive
The model weights are not tied. Please use the `tie_weights` method before using the `infer_auto_device` function.

Traceback (most recent call last):
  File "/home/github/ymcui/Chinese-LLaMA-Alpaca/scripts/inference/inference_hf.py", line 62, in <module>
    base_model = LlamaForCausalLM.from_pretrained(
  File "/home/anaconda3/envs/Chinese-LLaMA-Alpaca/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2795, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/home/anaconda3/envs/Chinese-LLaMA-Alpaca/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2889, in _load_pretrained_model
    raise ValueError(
ValueError: The current `device_map` had weights offloaded to the disk. Please provide an `offload_folder` for them. Alternatively, make sure you have `safetensors` installed if the model you are using offers the weights in this format.
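
For reference, a minimal loading sketch (not the repository's `inference_hf.py`) that passes an `offload_folder`, as the error suggests, so accelerate can page layers that don't fit in GPU/CPU memory out to disk. The model path is taken from the command above; the dtype, folder name, and other arguments are assumptions to adapt to your setup.

```python
import os
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# Merged HF 33B weights (same path as in the command above)
base_model_path = os.path.expanduser("~/chinese_alpaca_33b_bin")

tokenizer = LlamaTokenizer.from_pretrained(base_model_path)
model = LlamaForCausalLM.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,   # half precision to reduce memory use (assumption)
    low_cpu_mem_usage=True,
    device_map="auto",           # let accelerate split the model across GPU/CPU/disk
    offload_folder="offload",    # directory for weights offloaded to disk (assumption)
)
```

Loading with `device_map="auto"` but no `offload_folder` is exactly the situation the ValueError describes: some weights get assigned to disk and there is nowhere to write them. Whether disk offload is needed at all depends on how much GPU memory is available, which is why the first comment asks about it.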