# Auto Tagger with TensorRT implementation

This is meant to work with SmilingWolf's trained booru taggers.

Inference can be run with the models in TensorFlow, but it can be much faster with TensorRT if you have an NVIDIA GPU.

An explanation of how to convert these models to TensorRT is available at https://github.com/bdiaz29/ConvertTagger2TensorRT. It is very important that the environments for conversion and inference be kept separate, since TensorFlow versions newer than 2.10 do not have GPU support on Windows.
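As a rough illustration of the TensorFlow path, here is a minimal sketch of running a WD-style tagger and thresholding its outputs. The 448×448 input size, the BGR channel order, the `selected_tags.csv` file name, and the example paths are assumptions based on common WD tagger releases, not details taken from this repository.

```python
# Minimal sketch of TensorFlow inference with a WD-style tagger.
# Assumptions: 448x448 BGR input, a selected_tags.csv listing tag names,
# and a SavedModel folder at model_path -- adjust to your actual model.
import csv

import numpy as np
import tensorflow as tf
from PIL import Image

model_path = "wd-tagger-model"   # hypothetical SavedModel folder
image_path = "example.jpg"       # hypothetical image
threshold = 0.35                 # example threshold

# Load the tagger (a folder for the TensorFlow models).
model = tf.keras.models.load_model(model_path)

# Tag names, one per output index (file name/column are assumptions).
with open(f"{model_path}/selected_tags.csv", newline="", encoding="utf-8") as f:
    tag_names = [row["name"] for row in csv.DictReader(f)]

# Preprocess: resize, convert RGB -> BGR float32, add a batch dimension.
img = Image.open(image_path).convert("RGB").resize((448, 448))
x = np.asarray(img, dtype=np.float32)[:, :, ::-1][None, ...]

# Run inference and keep tags above the threshold.
probs = model.predict(x)[0]
tags = [name for name, p in zip(tag_names, probs) if p >= threshold]
print(", ".join(tags))
```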

Explanation of the parameters of autotagger.py (an example invocation follows the list):

"--image_dir" :the directory of the images to apply captions to
"--include_characters" :whether to include tags involving characters or not
'--tag_threshold': the threshold of wether to pass a tag or not
'--model_path' : the directory for the tagger model, will be a folder for WD tensorflow models and a file for TensorRT models
"--exclude_tags" : the tags you dont want to be applied to the captions even if they are above threshold
"--append_tags" : the tags you want to be applied to the front of the captions.
"--use_tensorrt" : set this flag if you intent to use tensorRT

There is also a GUI for ease of use.

## Installation

```
git clone https://github.com/bdiaz29/autotagger
cd autotagger
pip install -r requirements.txt
```