Description
Hello. I ran full-parameter fine-tuning with `sh finetune/finetune_ds.sh`; after training, the output directory structure is as shown in the figure. I then load the fine-tuned model like this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("./output_qwen", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("./output_qwen", device_map="cuda", trust_remote_code=True).eval()
model.generation_config = GenerationConfig.from_pretrained("./output_qwen", trust_remote_code=True)

query = tokenizer.from_list_format([
    {'image': 'demo.jpeg'},  # either a local path or a URL
    {'text': '这是什么?'},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
```
Running it raises the following error:

```
  File "/data/mis/LQ/Code/LLaVA-Surgery/Qwen-VL/qw_test.py", line 16, in <module>
    response, history = model.chat(tokenizer, query=query, history=None)
  File "/home/mis/.cache/huggingface/modules/transformers_modules/output_qwen/modeling_qwen.py", line 947, in chat
    outputs = self.generate(
  File "/home/mis/.cache/huggingface/modules/transformers_modules/output_qwen/modeling_qwen.py", line 1066, in generate
    return super().generate(
  File "/home/mis/miniconda3/envs/Qwen/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/mis/miniconda3/envs/Qwen/lib/python3.9/site-packages/transformers/generation/utils.py", line 1642, in generate
    return self.sample(
  File "/home/mis/miniconda3/envs/Qwen/lib/python3.9/site-packages/transformers/generation/utils.py", line 2760, in sample
    next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
RuntimeError: probability tensor contains either inf, nan or element < 0
```
My `transformers` version is 4.32.0.
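This error typically means the logits produced by the model contain `inf`/`NaN` values (for example, from a fine-tuning run whose loss diverged, or a checkpoint saved in a bad mixed-precision state). A minimal sketch of how the failure arises and how one might check whether the saved checkpoint itself is corrupted; the tensor values and the `find_bad_params` helper are illustrative, not part of the Qwen-VL code:

```python
import torch

# During sampling, generate() computes probs = softmax(logits) and then
# calls torch.multinomial(probs, ...). A single non-finite logit poisons
# the whole distribution and triggers the RuntimeError seen above.
logits = torch.tensor([[1.0, 2.0, float("nan")]])
probs = torch.softmax(logits, dim=-1)
print(torch.isfinite(probs).all())  # tensor(False)

# Scan a loaded model's parameters for non-finite values, to tell a
# corrupted checkpoint apart from a runtime numerical issue.
def find_bad_params(model):
    return [name for name, p in model.named_parameters()
            if not torch.isfinite(p).all()]
```

If `find_bad_params` reports anything for the model loaded from `./output_qwen`, the weights themselves were corrupted during training; if it reports nothing, trying greedy decoding (`do_sample=False` in the generation config) can help rule out the sampler.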