Banana.dev vit-base-patch16-224 starter template

This is a vit-base-patch16-224 image-to-text captioning starter template from Banana.dev that allows on-demand serverless GPU inference.

You can fork this repository and deploy it on Banana as is, or customize it based on your own needs.

Running this app

Deploying on Banana.dev

  1. Fork this repository to your own GitHub account.
  2. Connect your GitHub account on Banana.
  3. Create a new model on Banana from the forked GitHub repository.

Running after deploying

  1. Wait for the model to build after creating it.
  2. Make an API request using one of the snippets provided in your Banana dashboard. Instead of sending a prompt as shown in the snippet, send your image URL as follows:

inputs = {
    "image": "your_image_url"
}
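As a concrete sketch, the request payload can be built and sanity-checked locally before calling the API. The image URL below is a placeholder, and the commented-out `banana.run` call is an assumption based on the banana_dev Python client's documented interface; check the Banana.dev docs for the exact signature:

```python
import json

# Placeholder image URL -- replace with the URL of the image you want captioned.
inputs = {
    "image": "https://example.com/photo.jpg"
}

# The payload is sent as JSON, so serializing it locally is a quick sanity check.
payload = json.dumps(inputs)
print(payload)

# Assumed call shape of the banana_dev client (api_key and model_key come from
# your Banana dashboard):
#   import banana_dev as banana
#   result = banana.run(api_key, model_key, inputs)
```

The model's output (the generated caption) is returned in the JSON response of the API call.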

For more info, check out the Banana.dev docs.
