This repo provides a basic template for using GPT-JT on Banana's serverless GPU platform, ready for 1-Click deploy.
It includes test cases for Text Summarization (#1), Question Answering (#2), and Sentiment Analysis (#3). For more capabilities, see the HuggingFace GPT-JT demo.
The repo is already set up to run a basic HuggingFace GPT-JT model.
- Run `pip3 install -r requirements.txt` to download dependencies.
- Run `python3 server.py` to start the server.
- Run `python3 test.py` in a different terminal session to test against it.
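The test step above is an HTTP round trip against the local server. A minimal sketch of such a smoke test, assuming the server listens on `http://localhost:8000` and accepts a JSON body with a `prompt` field (adjust the URL and schema to match your `app.py`):

```python
import json
from urllib import request

def build_request(prompt: str, url: str = "http://localhost:8000/") -> request.Request:
    # Package the prompt as the JSON POST body the server is assumed to expect.
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return request.Request(url, data=body, headers={"Content-Type": "application/json"})

def call_server(prompt: str) -> dict:
    # Send the request and decode the JSON response from the model server.
    with request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())

# Example (with server.py running): call_server("Summarize: ...")
```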
- Edit `app.py` to load and run your model.
- Make sure to test with `test.py`!
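As a rough sketch of the shape `app.py` follows in Banana's templates, the handler is split into a one-time `init()` that loads the model and a per-request `inference(model_inputs)`. The echo "model" below is a stub standing in for the real GPT-JT pipeline, so the sketch stays self-contained:

```python
# Sketch of the init()/inference() contract, with a stub in place of the model.
model = None

def init():
    global model
    # In the real app.py this loads the model once at startup, e.g.:
    # model = pipeline("text-generation", model="togethercomputer/GPT-JT-6B-v1")
    model = lambda prompt: f"echo: {prompt}"  # stub standing in for the pipeline

def inference(model_inputs: dict) -> dict:
    # Called once per request with the parsed JSON body.
    prompt = model_inputs.get("prompt")
    if prompt is None:
        return {"message": "No prompt provided"}
    return {"output": model(prompt)}

init()
print(inference({"prompt": "hello"}))  # -> {'output': 'echo: hello'}
```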
If deploying using Docker:
- Edit `download.py` (or the `Dockerfile` itself) with scripts that download your custom model weights at build time.
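A hypothetical `download.py` sketch for that step, assuming the `huggingface_hub` package is available in the build environment. The model id shown is the public GPT-JT checkpoint; substitute your own:

```python
# Fetch model weights at Docker build time so they are baked into the image
# rather than downloaded on every cold start.
MODEL_ID = "togethercomputer/GPT-JT-6B-v1"

def download_weights(model_id: str = MODEL_ID) -> str:
    # Imported lazily so the module can be inspected without the dependency.
    from huggingface_hub import snapshot_download
    # Downloads (or reuses) the full repo snapshot and returns its local path.
    return snapshot_download(repo_id=model_id)

if __name__ == "__main__":
    download_weights()
```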
At this point, you have a functioning HTTP server for your ML model. You can use it as is, or package it up with our provided `Dockerfile` and deploy it to your favorite container hosting provider!
If Banana is your favorite GPU hosting provider (and we sure hope it is), read on!
- Log in to the Banana App
- Select your customized repo for deploy!
It'll then be built from the `Dockerfile`, optimized, then deployed on our serverless GPU cluster and callable with any of our SDKs.
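As a hypothetical caller sketch for the Python SDK: the key values below are placeholders, and the exact `banana_dev` call signature may vary by SDK version, so treat this as a shape rather than a reference:

```python
def build_model_inputs(prompt: str, length: int = 50) -> dict:
    # Assemble the model_inputs dict the deployed handler expects (assumed schema).
    return {"prompt": prompt, "length": length}

# Once `banana-dev` is installed and your keys are set:
# import banana_dev as banana
# out = banana.run("YOUR_API_KEY", "YOUR_MODEL_KEY",
#                  build_model_inputs("Q: What is a banana? A:"))
```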
You can monitor build-time and runtime logs by clicking the logs button in the model view on the [Banana Dashboard](https://app.banana.dev).