- Language-agnostic PyTorch model serving
- Serve a JIT-compiled PyTorch model in a production environment
This project is an extension of Brusta, the original project with Scala/Java support.
- docker == 18.09.1
- go >= 1.13
- your JIT-traced PyTorch model (if you are not familiar with JIT tracing, please refer to the JIT tutorial; a minimal tracing sketch follows this list)
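As a minimal sketch of how a traced model file is produced: the model below is a hypothetical placeholder (a linear layer with input dimension 3, matching the request example further down); substitute your own trained model and a representative example input.

```python
import torch

# Hypothetical placeholder model with input dimension 3; use your own trained model.
model = torch.nn.Linear(3, 1)
model.eval()

# A representative example input with the shape the model expects at serving time.
example_input = torch.ones(1, 3)

# Record the model's operations on the example input into a TorchScript graph.
traced = torch.jit.trace(model, example_input)

# Serialize the traced model; this is the file you load on the model server.
traced.save("model.pt")
```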
- Run "make" to make your PyTorch model server binary (libtorch should be pre-installed)
- Load your traced PyTorch model file on the "model server"
- Run the model server
- TBD
- TBD
Send a request to the model server as follows (suppose your input dimension is 3):
```bash
curl -X POST -d '{"input":[1.0, 1.0, 1.0]}' localhost:8080/predict
```
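The same request can be made from Python; this sketch assumes, like the curl example, that the server is running locally on port 8080 with input dimension 3. The response schema is not documented here, so the raw body is printed as-is.

```python
import requests

# Same endpoint and payload as the curl example above.
resp = requests.post(
    "http://localhost:8080/predict",
    json={"input": [1.0, 1.0, 1.0]},
)

# The response format is not documented here, so just print the raw body.
print(resp.status_code, resp.text)
```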
- YongRae Jo (dreamgonfly@gmail.com)
- YoonHo Jo (cloudjo21@gmail.com)
- GiChang Lee (new.ratsgo@gmail.com)
- SukHyun Ko (s3cr3t)
- Seunghwan Hong (harrydrippin@gmail.com)
- Alex Kim (hyoungseok.k@gmail.com, Original project)