This project is developed and tested with Python 3.10.
```shell
# Create and activate a Python 3.10 environment.
conda create -n yolo-world-with-efficientvit-sam python=3.10 -y
conda activate yolo-world-with-efficientvit-sam

# Set up packages.
make setup

# Launch the demo app.
python app.py
```
Then open http://127.0.0.1:7860/ in your web browser.
YOLO-World is a highly efficient open-vocabulary object detection model. On the challenging LVIS dataset, it achieves 35.4 AP at 52.0 FPS on a V100 GPU, outperforming many state-of-the-art methods in both accuracy and speed.
EfficientViT-SAM is a new family of accelerated segment-anything models. Thanks to its lightweight, hardware-efficient core building block, it delivers a 48.9× measured TensorRT speedup over SAM-ViT-H on an A100 GPU without sacrificing performance.
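The app chains the two models: YOLO-World produces open-vocabulary boxes, which then serve as box prompts for EfficientViT-SAM. The sketch below shows that hand-off step only, with a minimal NumPy helper that filters detections by confidence before passing them on as prompts; the function name `boxes_to_sam_prompts` and the threshold value are illustrative assumptions, not part of the repo's actual API.

```python
# Sketch: turning YOLO-World detections into box prompts for
# EfficientViT-SAM. Names here are hypothetical, not the repo's API.
import numpy as np

def boxes_to_sam_prompts(boxes, scores, score_thr=0.3):
    """Keep only (x1, y1, x2, y2) boxes confident enough to be used
    as box prompts for the segmentation model."""
    boxes = np.asarray(boxes, dtype=np.float32)
    scores = np.asarray(scores, dtype=np.float32)
    keep = scores >= score_thr
    return boxes[keep]

# Example: two detections, one below the confidence threshold.
boxes = [[10, 20, 110, 220], [5, 5, 50, 50]]
scores = [0.9, 0.1]
prompts = boxes_to_sam_prompts(boxes, scores)
print(prompts.shape)  # (1, 4): only the confident box becomes a prompt
```

In the real pipeline, each surviving box would be fed to EfficientViT-SAM's box-prompt interface to produce one mask per detection.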
![](https://private-user-images.githubusercontent.com/14961526/305708474-9eec003f-47c9-43a5-86b0-82d6689e1bf9.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjA1Njg0NDIsIm5iZiI6MTcyMDU2ODE0MiwicGF0aCI6Ii8xNDk2MTUyNi8zMDU3MDg0NzQtOWVlYzAwM2YtNDdjOS00M2E1LTg2YjAtODJkNjY4OWUxYmY5LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA3MDklMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNzA5VDIzMzU0MlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWVkYzE0Mzk5NDY5NGFjZjEwZGY5ZDNhNjFmZmI4MjhjNmM1MDZiMmFiMmFiODU5NWQ5ZjUxNWVlNTlmMjAzN2YmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.jaGzHbKiuha9gWSQax895StFT2S8PUaSpVTQFYWaQ74)
![](https://private-user-images.githubusercontent.com/14961526/305708433-d79973bb-0d80-4b64-a175-252de56d0d09.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjA1Njg0NDIsIm5iZiI6MTcyMDU2ODE0MiwicGF0aCI6Ii8xNDk2MTUyNi8zMDU3MDg0MzMtZDc5OTczYmItMGQ4MC00YjY0LWExNzUtMjUyZGU1NmQwZDA5LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA3MDklMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNzA5VDIzMzU0MlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTMxN2IwZjkxNzlkZTdiNGFjNTk1YjJiZDI5N2NjMDUwYTRiZWQ1M2JmMGQxMWQzYTAwNmY3YzU2ZmJhMDRiNzYmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.oxQqhTp8hw-_rXhZQS15R6SSgs4U5DRRFZABKDQ6NXk)
```bibtex
@misc{zhang2024efficientvitsam,
  title={EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss},
  author={Zhuoyang Zhang and Han Cai and Song Han},
  year={2024},
  eprint={2402.05008},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

@article{cheng2024yolow,
  title={YOLO-World: Real-Time Open-Vocabulary Object Detection},
  author={Cheng, Tianheng and Song, Lin and Ge, Yixiao and Liu, Wenyu and Wang, Xinggang and Shan, Ying},
  journal={arXiv preprint arXiv:2401.17270},
  year={2024}
}

@article{cai2022efficientvit,
  title={Efficientvit: Enhanced linear attention for high-resolution low-computation visual recognition},
  author={Cai, Han and Gan, Chuang and Han, Song},
  journal={arXiv preprint arXiv:2205.14756},
  year={2022}
}
```