Proxy server for triton gRPC server that inferences embedding model in Rust
Updated Aug 10, 2024 · Rust
This repository utilizes the Triton Inference Server Client, which reduces the complexity of model deployment.
This repository demonstrates instance segmentation with a YOLOv8 model on the Triton Inference Server.
A Node.js client for the Triton Inference Server.