Transformers 4.31.0 is preferred.
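If it helps, here is a minimal sanity check for the installed version (a sketch; the strict equality check is just one way to verify it):

```python
import transformers

# The recommended version is 4.31.0; fail loudly if a different one is installed.
assert transformers.__version__ == "4.31.0", (
    f"found transformers {transformers.__version__}, but 4.31.0 is preferred"
)
```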
Please check that you have updated the code to the latest version and correctly downloaded all of the sharded checkpoint files.
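As a sketch, assuming the checkpoint is hosted on the Hugging Face Hub under an id such as `Qwen/Qwen-VL-Chat`, `snapshot_download` can be used to (re)fetch every shard:

```python
from huggingface_hub import snapshot_download

# Download (or resume downloading) every file of the checkpoint, including all
# sharded weight files; the repo id below is an assumed example.
local_dir = snapshot_download(repo_id="Qwen/Qwen-VL-Chat")
print("checkpoint downloaded to:", local_dir)
```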
This is the tokenizer's merge file, and you have to download it. Note that if you just `git clone` the repo without installing git-lfs, this file will not be downloaded.
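Alternatively, a minimal sketch that fetches the file directly from the Hub, where both the repo id and the file name `qwen.tiktoken` are assumptions used for illustration:

```python
from huggingface_hub import hf_hub_download

# Fetch the tokenizer merge file directly, without relying on git-lfs.
# Both the repo id and the file name are assumptions used for illustration.
path = hf_hub_download(repo_id="Qwen/Qwen-VL-Chat", filename="qwen.tiktoken")
print("tokenizer merge file saved at:", path)
```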
Run the command `pip install -r requirements.txt`. You can find the file at https://github.com/QwenLM/Qwen-VL/blob/main/requirements.txt.
Yes. See `web_demo_mm.py` for the web demo, and the README for more information.
No. We do not support streaming yet.
Please check if you are loading Qwen-VL-Chat instead of Qwen-VL. Qwen-VL is the base model without alignment, which behaves differently from the SFT/Chat model.
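For reference, a minimal loading sketch based on the repo's usage examples; the image path is a placeholder, and the `from_list_format`/`chat` helpers are provided by the model's remote code (hence `trust_remote_code=True`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the aligned Chat model, not the Qwen-VL base model.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-VL-Chat", device_map="auto", trust_remote_code=True
).eval()

# Build a multimodal query and chat with the model; the image path is a placeholder.
query = tokenizer.from_list_format([
    {"image": "path/to/your/image.jpg"},
    {"text": "Describe this image."},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
```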
No. We will support quantization as soon as possible.
Please ensure that NTK is applied: `use_dynamic_ntk` and `use_logn_attn` in `config.json` should be set to `true` (`true` by default).
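A quick way to confirm the flags from Python (a sketch, assuming the Hugging Face checkpoint id `Qwen/Qwen-VL-Chat`):

```python
from transformers import AutoConfig

# Both flags should be True (they are by default).
config = AutoConfig.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
print("use_dynamic_ntk:", config.use_dynamic_ntk)
print("use_logn_attn:", config.use_logn_attn)
```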
In our training, we only use `<|endoftext|>` as the separator and padding token. You can set `bos_id`, `eos_id`, and `pad_id` to `tokenizer.eod_id`. Learn more about our tokenizer in our tokenizer documentation.
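A small sketch of reusing the eod id, assuming the Hugging Face checkpoint id `Qwen/Qwen-VL-Chat`:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)

# <|endoftext|> is the only separator / padding token used in training, so its id
# can stand in wherever a bos/eos/pad id is required, e.g. during generation:
eod_id = tokenizer.eod_id
# model.generate(..., eos_token_id=eod_id, pad_token_id=eod_id)
print("eod_id:", eod_id)
```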