👨🏾‍🍳 Blogpost - Building Multimodal AI in TypeScript
First, clone the project with the command below:

```bash
git clone https://github.com/weaviate-tutorials/next-multimodal-search-demo
```
The repository lets you do three things:
- Run the Next.js Web App.
- Run a local instance of Weaviate or create a Weaviate Sandbox.
- Import images, audio, and videos into your Weaviate database.
Note that the first time you run it, Docker will download the ~4.8 GB multi2vec-bind Weaviate module, which contains the ImageBind model.
To start the Weaviate instance, run the following command, which uses the docker-compose.yml file:

```bash
docker compose up -d
```
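For orientation, the relevant part of such a docker-compose.yml pairs Weaviate with the multi2vec-bind inference container. The sketch below is illustrative only (image tags, ports, and extra settings are assumptions); the file shipped in the repo is authoritative.

```yaml
# Illustrative sketch only; see the repo's docker-compose.yml for the real config.
services:
  weaviate:
    image: semitechnologies/weaviate:1.24.1            # assumed version tag
    ports:
      - 8080:8080
    environment:
      ENABLE_MODULES: multi2vec-bind                   # enable the ImageBind vectorizer module
      DEFAULT_VECTORIZER_MODULE: multi2vec-bind
      BIND_INFERENCE_API: http://multi2vec-bind:8080   # where Weaviate reaches the model container
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
  multi2vec-bind:
    image: semitechnologies/multi2vec-bind:imagebind   # the ~4.8 GB ImageBind model
    environment:
      ENABLE_CUDA: '0'
```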
Alternatively, create a Weaviate Sandbox on Weaviate Cloud Services as described in this guide.
Next, set the following environment variables for the project:

- your Google Vertex API key as `GOOGLE_API_KEY` (you can get this in your Vertex AI settings)
- your Weaviate API key as `WEAVIATE_ADMIN_KEY` (you can get this in your Weaviate dashboard under sandbox details)
- your Weaviate host URL as `WEAVIATE_HOST_URL` (you can get this in your Weaviate dashboard under sandbox details)
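For local development these typically live in an environment file at the project root, for example a .env file, which Next.js loads automatically. A minimal sketch with placeholder values (use your own keys and host):

```
# .env (placeholder values; replace with your own)
GOOGLE_API_KEY=your-vertex-ai-api-key
WEAVIATE_ADMIN_KEY=your-weaviate-api-key
WEAVIATE_HOST_URL=your-sandbox.weaviate.network
```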
Before you can import data, place your files in the folder for their media type inside the public/ folder.
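For example, the layout might look like this (the exact subfolder names depend on the repository; image, audio, and video here are assumptions):

```
public/
├── image/   # .jpg, .png, ...
├── audio/   # .mp3, .wav, ...
└── video/   # .mp4, ...
```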
With your data in the right folders, install all project dependencies:

```bash
yarn install
```

Then, to initialize a collection and import your data into Weaviate, run:

```bash
yarn run import
```

This may take a minute or two.
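Under the hood, an import script along these lines sends each file to Weaviate as a base64-encoded blob so the multi2vec-bind module can vectorize it. This is only a minimal sketch, assuming the weaviate-ts-client v2 API and a hypothetical `BindExample` collection with hypothetical property names; the import script in the repo is the source of truth.

```typescript
import weaviate, { ApiKey } from 'weaviate-ts-client';
import { readFileSync, readdirSync } from 'fs';
import path from 'path';

// Connect using the environment variables set earlier.
const client = weaviate.client({
  scheme: 'https',
  host: process.env.WEAVIATE_HOST_URL ?? '',
  apiKey: new ApiKey(process.env.WEAVIATE_ADMIN_KEY ?? ''),
});

// Import every image in public/image as a base64-encoded blob.
// 'BindExample' and the property names are hypothetical.
async function importImages() {
  const dir = path.join(process.cwd(), 'public', 'image');
  for (const file of readdirSync(dir)) {
    const b64 = readFileSync(path.join(dir, file)).toString('base64');
    await client.data
      .creator()
      .withClassName('BindExample')
      .withProperties({ name: file, image: b64, media: 'image' })
      .do();
  }
}

importImages().then(() => console.log('Done importing images'));
```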
Make sure you have your Weaviate instance running with data imported before starting your Next.js Web App.
To run the Web App:

```bash
yarn dev
```
... and you can search away!!
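On the search side, the app queries the same collection. As a rough illustration, reusing the client from the import sketch above and the same hypothetical `BindExample` collection, a text search over the multimodal vectors could look like this:

```typescript
// Hypothetical text search over the multimodal collection.
// multi2vec-bind embeds the query text into the same vector space as
// the imported images, audio, and video.
async function search(query: string) {
  return client.graphql
    .get()
    .withClassName('BindExample')        // hypothetical collection name
    .withFields('name media')            // properties to return
    .withNearText({ concepts: [query] }) // vector search on the query text
    .withLimit(6)
    .do();
}

search('dogs playing in the snow').then((res) =>
  console.log(JSON.stringify(res, null, 2))
);
```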
Learn more about multimodal applications:
- Check out the Weaviate Docs
- Open an Issue
Some credit goes to Steven for his Spirals template.