To install TensorRT, first install CUDA, then follow NVIDIA's guide to install cuDNN.
Make sure you have a Python 3.10 environment set up, then download TensorRT.
Once you extract the TensorRT download you will see two folders that matter: bin and python. bin\trtexec.exe is what you will use to build TRT engines, and the python folder contains the wheels used to install TensorRT into your environment. Find the wheel that corresponds to Python 3.10 and copy its path (on Windows you can do this by holding Shift, right-clicking the file, and selecting "Copy as path").
Then install it into your environment with pip install [filepath to wheel],
and TensorRT should be installed.
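The install step might look like the following. The wheel path and filename here are examples only; use the actual path you copied from the python folder of your TensorRT download:

```shell
# Install the wheel matching Python 3.10 (example path/filename; substitute
# the path you copied from the TensorRT python folder)
pip install "C:\TensorRT-8.6.1.6\python\tensorrt-8.6.1-cp310-none-win_amd64.whl"

# Quick sanity check: import tensorrt and print its version
python -c "import tensorrt; print(tensorrt.__version__)"
```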
Model conversion: for model conversion you need to keep note of where you installed TensorRT and point the toml file in this repo to its location.
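As a hypothetical example of what that entry could look like (the actual key name is defined by the toml file in this repo, so check it there):

```toml
# Hypothetical key name; use whatever key the repo's toml file actually defines
trt_path = "C:\\TensorRT-8.6.1.6"
```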
Use the convert model Python script to convert the WD tagger. The script will first convert the model to ONNX and then convert that to a TensorRT engine.
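The second step (ONNX to engine) goes through trtexec. A minimal sketch of assembling that invocation is below; the file names, the input shape, and the helper function itself are illustrative assumptions, not the repo's actual script (the flag names are trtexec's documented CLI):

```python
import subprocess

def build_trtexec_cmd(trtexec: str, onnx_path: str, engine_path: str,
                      batch_size: int) -> list[str]:
    """Assemble the trtexec command that turns an ONNX file into a
    TensorRT engine, using trtexec's documented flags."""
    return [
        trtexec,
        f"--onnx={onnx_path}",          # input ONNX model
        f"--saveEngine={engine_path}",  # output TRT engine
        # Fix a static batch size; the 448x448x3 input shape and the
        # input tensor name are assumptions about the WD tagger
        f"--shapes=input:{batch_size}x448x448x3",
    ]

cmd = build_trtexec_cmd(r"C:\TensorRT\bin\trtexec.exe",
                        "wd-tagger.onnx", "wd-tagger.trt", batch_size=4)
# subprocess.run(cmd, check=True)  # uncomment to actually build the engine
```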
First, locate the directory of the tagger files, which should look like this:
That folder's path goes into --model_path. Assign a batch_size and a name, then run the script. You will see a message in the terminal when conversion is finished; this can take a while.
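A run of the script might look like this. The script filename and the flag spellings other than --model_path are assumptions; check the script's --help output for the exact names:

```shell
# Example invocation (script name and paths are illustrative)
python convert_model.py --model_path "C:\models\wd-v1-4-tagger" --batch_size 4 --name wd_tagger
```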