onnxruntime support gpu (#10668)
* Update ch_PP-OCRv3_rec.yml

* Update ch_PP-OCRv3_rec_distillation.yml

* Update en_PP-OCRv3_rec.yml

* Update arabic_PP-OCRv3_rec.yml

* Update chinese_cht_PP-OCRv3_rec.yml

* Update cyrillic_PP-OCRv3_rec.yml

* Update devanagari_PP-OCRv3_rec.yml

* Update japan_PP-OCRv3_rec.yml

* Update ka_PP-OCRv3_rec.yml

* Update korean_PP-OCRv3_rec.yml

* Update latin_PP-OCRv3_rec.yml

* Update ta_PP-OCRv3_rec.yml

* Update te_PP-OCRv3_rec.yml

* Update utility.py
WenmuZhou authored Aug 17, 2023
1 parent 8f010ec commit dbf35bb
Showing 1 changed file with 4 additions and 1 deletion: tools/infer/utility.py
@@ -187,7 +187,10 @@ def create_predictor(args, mode, logger):
         if not os.path.exists(model_file_path):
             raise ValueError("not find model file path {}".format(
                 model_file_path))
-        sess = ort.InferenceSession(model_file_path)
+        if args.use_gpu:
+            sess = ort.InferenceSession(model_file_path, providers=['CUDAExecutionProvider'])
+        else:
+            sess = ort.InferenceSession(model_file_path)
         return sess, sess.get_inputs()[0], None, None
 
     else:
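
For reference, the same provider-selection idea can be sketched as a standalone onnxruntime call. This is illustrative only and not part of the commit: the create_session helper, the model.onnx path, and the use_gpu argument are hypothetical names, and it adds an explicit CPUExecutionProvider fallback plus an availability check that the commit itself does not include.

import onnxruntime as ort

def create_session(model_file_path, use_gpu):
    # Illustrative helper (not from the commit): pick execution providers,
    # preferring CUDA when it is requested and actually available.
    if use_gpu and 'CUDAExecutionProvider' in ort.get_available_providers():
        providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']  # CPU as fallback
    else:
        providers = ['CPUExecutionProvider']
    return ort.InferenceSession(model_file_path, providers=providers)

# Example usage with a hypothetical model file:
# sess = create_session('model.onnx', use_gpu=True)
# outputs = sess.run(None, {sess.get_inputs()[0].name: input_array})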
