Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP.
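The pipeline-parallel setup this repo refers to generally follows DeepSpeed's `PipelineModule` pattern. Below is a minimal sketch with a toy stack of `nn.Linear` layers, a two-stage split, and an assumed config dict; it is not the repo's code, which would instead partition the embedding, transformer blocks, and head of the listed models.

```python
# Minimal DeepSpeed pipeline-parallel sketch (toy model, assumed config).
# Launch with the `deepspeed` launcher so RANK/WORLD_SIZE are set, e.g.:
#   deepspeed --num_gpus=2 train_pipe.py
import torch.nn as nn
import deepspeed
from deepspeed.pipe import PipelineModule

deepspeed.init_distributed()

# Stand-in layer list; a real LLM would place its embedding,
# transformer blocks, and LM head here.
layers = [nn.Linear(1024, 1024) for _ in range(8)]

model = PipelineModule(
    layers=layers,
    loss_fn=nn.MSELoss(),   # assumed loss for the toy example
    num_stages=2,           # split the layer list across 2 pipeline stages
)

ds_config = {
    "train_batch_size": 32,
    "train_micro_batch_size_per_gpu": 4,  # micro-batches keep the pipeline full
    "optimizer": {"type": "Adam", "params": {"lr": 1e-5}},
}

engine, _, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=[p for p in model.parameters() if p.requires_grad],
    config=ds_config,
)

# train_batch() pulls micro-batches of (inputs, labels) from the iterator
# and schedules forward/backward across the pipeline stages:
# loss = engine.train_batch(data_iter=train_iter)
```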
[ACL 2024 Main] NewsBench: A Systematic Evaluation Framework for Assessing Editorial Capabilities of Large Language Models in Chinese Journalism
LLM API server, OpenAI-compatible, supporting ChatGLM3, Llama, Llama-3, Firefunction, Openfunctions, BAAI/bge-m3, and bge-large-zh-v1.5.
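An OpenAI-compatible server like this can be exercised with the stock `openai` client by pointing `base_url` at it. A minimal sketch follows; the address `http://localhost:8000/v1`, the placeholder API key, and the model ids are assumptions, not taken from the repo.

```python
# Query an OpenAI-compatible endpoint with the official client.
# base_url, api_key, and model names below are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# Chat completion against the locally served ChatGLM3 model.
resp = client.chat.completions.create(
    model="chatglm3-6b",
    messages=[{"role": "user", "content": "Summarize ChatGLM3-6B in one sentence."}],
)
print(resp.choices[0].message.content)

# Embeddings through the same API, served by an assumed bge-m3 model id.
emb = client.embeddings.create(model="BAAI/bge-m3", input=["hello world"])
print(len(emb.data[0].embedding))
```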
Uses FastAPI with the pretrained CodeGeeX2 model to let an AI generate code through an API hook.
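A FastAPI wrapper around CodeGeeX2 might look like the sketch below, with the model loaded from the public `THUDM/codegeex2-6b` checkpoint via `transformers`. The endpoint path, request schema, and generation parameters are assumptions; run it with something like `uvicorn main:app` on a GPU machine.

```python
# Minimal FastAPI code-generation endpoint around CodeGeeX2 (assumed layout).
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoTokenizer, AutoModel

app = FastAPI()

# CodeGeeX2 ships custom modeling code, hence trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained("THUDM/codegeex2-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/codegeex2-6b", trust_remote_code=True).half().cuda()
model.eval()

class CodeRequest(BaseModel):
    prompt: str              # e.g. "# language: Python\n# bubble sort\n"
    max_new_tokens: int = 256

@app.post("/generate")       # hypothetical route name
def generate(req: CodeRequest):
    inputs = tokenizer(req.prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=req.max_new_tokens)
    code = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return {"completion": code}
```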