I ran finetune_pt_multiturn.sh to fine-tune the model with my own dataset.
Afterwards I ran the following in the terminal:
!SET MODEL_PATH=THUDM/chatglm3-6b
!SET PT_PATH=./output/kirin_v0.8-20231230-044633-128-2e-2/checkpoint-1000/
!streamlit run ./ChatGLM3/composite_demo/main.py
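One thing worth noting about the commands above: in an IPython-style session, each `!` command runs in its own subshell, so variables set with `!SET ...` may not persist into the later `!streamlit run ...` process, in which case the demo falls back to the base model without the p-tuning checkpoint. A minimal pre-flight sketch (the variable names `MODEL_PATH` and `PT_PATH` follow the composite_demo; the check itself is hypothetical):

```python
import os

# Hypothetical pre-flight check before launching the demo: the composite_demo
# reads MODEL_PATH and PT_PATH from the environment, so both should be visible
# in the same process that launches streamlit.
model_path = os.environ.get("MODEL_PATH", "THUDM/chatglm3-6b")
pt_path = os.environ.get("PT_PATH")

if pt_path is None:
    print("PT_PATH is not set; the p-tuning prefix encoder would not be loaded.")
else:
    print(f"PT_PATH = {pt_path}")
```

If `PT_PATH` is missing here, setting both variables in the same shell session that launches streamlit (rather than via separate `!` invocations) should propagate them.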
Although the streamlit page starts up normally, every time I type any text into the chat box and send it, I get the same error.
The full error log is:
== Input ==
hi
==History==
[{'role': 'system', 'content': "You are ChatGLM3, a large language model trained by Zhipu.AI. Follow the user's instructions carefully. Respond using markdown."}, {'role': 'user', 'content': 'hi'}, {'role': 'user', 'content': 'hi'}, {'role': 'user', 'content': 'hi'}]
2024-01-06 15:24:39.197 Uncaught app exception
Traceback (most recent call last):
File "C:\Users\user\anaconda3\envs\kirin\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 534, in _run_script
exec(code, module.__dict__)
File "C:\Users\user\Desktop\ai\ChatGLM3\composite_demo\main.py", line 72, in <module>
demo_chat.main(
File "C:\Users\user\Desktop\ai./ChatGLM3/composite_demo\demo_chat.py", line 62, in main
for response in client.generate_stream(
File "C:\Users\user\Desktop\ai./ChatGLM3/composite_demo\client.py", line 182, in generate_stream
for new_text, _ in stream_chat(
File "C:\Users\user\Desktop\ai./ChatGLM3/composite_demo\client.py", line 110, in stream_chat
for outputs in self.stream_generate(**inputs, past_key_values=past_key_values,
File "C:\Users\user\anaconda3\envs\kirin\lib\site-packages\torch\utils\_contextlib.py", line 35, in generator_context
response = gen.send(None)
File "C:\Users\user\.cache\huggingface\modules\transformers_modules\THUDM\chatglm3-6b\b098244a71fbe69ce149682d9072a7629f7e908c\modeling_chatglm.py", line 1159, in stream_generate
outputs = self(
File "C:\Users\user\anaconda3\envs\kirin\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\user\anaconda3\envs\kirin\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\user\anaconda3\envs\kirin\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = module._old_forward(*args, **kwargs)
File "C:\Users\user\.cache\huggingface\modules\transformers_modules\THUDM\chatglm3-6b\b098244a71fbe69ce149682d9072a7629f7e908c\modeling_chatglm.py", line 937, in forward
transformer_outputs = self.transformer(
File "C:\Users\user\anaconda3\envs\kirin\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\user\anaconda3\envs\kirin\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\user\.cache\huggingface\modules\transformers_modules\THUDM\chatglm3-6b\b098244a71fbe69ce149682d9072a7629f7e908c\modeling_chatglm.py", line 811, in forward
past_key_values = self.get_prompt(batch_size=batch_size, device=input_ids.device,
File "C:\Users\user\.cache\huggingface\modules\transformers_modules\THUDM\chatglm3-6b\b098244a71fbe69ce149682d9072a7629f7e908c\modeling_chatglm.py", line 773, in get_prompt
past_key_values = self.prefix_encoder(prefix_tokens).type(dtype)
File "C:\Users\user\anaconda3\envs\kirin\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\user\anaconda3\envs\kirin\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\user\anaconda3\envs\kirin\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = module._old_forward(*args, **kwargs)
File "C:\Users\user\.cache\huggingface\modules\transformers_modules\THUDM\chatglm3-6b\b098244a71fbe69ce149682d9072a7629f7e908c\modeling_chatglm.py", line 89, in forward
past_key_values = self.embedding(prefix)
File "C:\Users\user\anaconda3\envs\kirin\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\user\anaconda3\envs\kirin\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\user\anaconda3\envs\kirin\lib\site-packages\accelerate\hooks.py", line 160, in new_forward
args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
File "C:\Users\user\anaconda3\envs\kirin\lib\site-packages\accelerate\hooks.py", line 294, in pre_forward
module, name, self.execution_device, value=self.weights_map[name], fp16_statistics=fp16_statistics
File "C:\Users\user\anaconda3\envs\kirin\lib\site-packages\accelerate\utils\offload.py", line 118, in __getitem__
return self.dataset[f"{self.prefix}{key}"]
File "C:\Users\user\anaconda3\envs\kirin\lib\site-packages\accelerate\utils\offload.py", line 165, in __getitem__
weight_info = self.index[key]
KeyError: 'transformer.prefix_encoder.embedding.weight'
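For context, the `KeyError` means accelerate's offload index has no entry for `transformer.prefix_encoder.embedding.weight`, which is consistent with the prefix encoder never having received the fine-tuned weights. The p-tuning loading convention in the THUDM repos strips the module prefix from checkpoint keys before calling `load_state_dict` on the prefix encoder; a minimal sketch of that key-renaming step (the checkpoint dict below is illustrative, not a real state dict):

```python
# Sketch of the key-renaming step used when loading a p-tuning checkpoint:
# checkpoint keys carry a "transformer.prefix_encoder." prefix that must be
# stripped before loading into model.transformer.prefix_encoder itself.
PREFIX = "transformer.prefix_encoder."

# Illustrative checkpoint contents (real values would be tensors).
checkpoint = {
    "transformer.prefix_encoder.embedding.weight": [[0.1, 0.2]],
    "transformer.unrelated.weight": [[9.9]],
}

prefix_state_dict = {
    key[len(PREFIX):]: value
    for key, value in checkpoint.items()
    if key.startswith(PREFIX)
}

print(sorted(prefix_state_dict))  # ['embedding.weight']
```

If the checkpoint actually lacks these keys, or the variables pointing at it never reach the demo process, the prefix encoder's weight ends up absent from the loaded state, matching the error above.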
Could anyone advise on how to solve this problem?