A simple and "tiny" implementation of many multimodal models. It supports training, finetuning, and deploying these tiny-sized models. Unlike the popular "large" models, every model in this repo is restricted to training on a single RTX 3080 Ti, so the implementations will not exactly match the original papers.
conda create -n tinym python=3.10
conda activate tinym
git clone git@github.com:RobinDong/tiny_multimodal.git
cd tiny_multimodal
python -m pip install -r requirements.txt
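Since the repo targets training on an RTX 3080 Ti, the requirements presumably include PyTorch with CUDA support. As a quick sanity check after installation (a minimal sketch, assuming PyTorch is among the dependencies):

```python
import torch

# Verify that PyTorch can see the GPU before starting a training run.
if torch.cuda.is_available():
    print("CUDA OK:", torch.cuda.get_device_name(0))
else:
    print("No CUDA device found; training will fall back to CPU.")
```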
Download conceptual-12m from Huggingface to a directory named cc12m-wds. Then use utils/extract_tars.py to convert CC12M into a ready-to-use format:
python utils/extract_tars.py --input_path=<YOUR_DIR>/cc12m-wds/ --output_path=<YOUR_OUTPUT_PATH> --jobs=<YOUR_CPU_CORES>
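Conceptually, this step unpacks the WebDataset `.tar` shards (each containing paired image/caption files) in parallel across CPU cores. Below is a minimal sketch of what such an extraction step might look like; it is not the repo's actual `utils/extract_tars.py`, and the function names and output layout here are hypothetical:

```python
import tarfile
from pathlib import Path
from multiprocessing import Pool

def extract_shard(args):
    """Extract one WebDataset .tar shard (image/caption pairs) into output_dir."""
    tar_path, output_dir = args
    with tarfile.open(tar_path) as tar:
        # Each member is typically a <key>.jpg plus a matching <key>.txt caption.
        tar.extractall(output_dir)
    return tar_path.name

def extract_all(input_path, output_path, jobs):
    out = Path(output_path)
    out.mkdir(parents=True, exist_ok=True)
    shards = sorted(Path(input_path).glob("*.tar"))
    # Fan the shards out over `jobs` worker processes, like the --jobs flag above.
    with Pool(jobs) as pool:
        for name in pool.imap_unordered(extract_shard, [(s, out) for s in shards]):
            print("done:", name)

if __name__ == "__main__":
    extract_all("cc12m-wds", "cc12m-extracted", jobs=8)  # hypothetical paths
```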
python train.py --provider CLIP
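The CLIP provider trains a CLIP-style model, whose core objective is a symmetric contrastive (InfoNCE) loss over matched image/text embedding pairs. As a minimal sketch of that objective (assuming PyTorch; the encoders, batch size, and embedding width are stand-ins, not the repo's actual implementation):

```python
import torch
import torch.nn.functional as F

def clip_loss(image_features, text_features, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings."""
    # Normalize so the dot product becomes cosine similarity.
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)
    # logits[i][j] = similarity of image i with text j, scaled by temperature.
    logits = image_features @ text_features.t() / temperature
    # Matching pairs sit on the diagonal of the similarity matrix.
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i = F.cross_entropy(logits, targets)      # image -> text direction
    loss_t = F.cross_entropy(logits.t(), targets)  # text -> image direction
    return (loss_i + loss_t) / 2

# Random embeddings standing in for encoder outputs:
imgs = torch.randn(8, 512)
txts = torch.randn(8, 512)
print(clip_loss(imgs, txts).item())
```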
This repo is still under development, so please be patient while more multimodal models are added.
Any issue or pull request is welcome.