# video-language-model

Here are 4 public repositories matching this topic...


Personal Project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Supports [video/image/multi-image] {sft/conversations}. Don't let poverty limit your imagination! Train your own 8B/14B LLaVA-training-like MLLM on an RTX 3090/4090 with 24GB of VRAM.

  • Updated Sep 24, 2024
  • Jupyter Notebook
