diff --git a/README.md b/README.md
index 0ac7a0bd2..aa8978150 100644
--- a/README.md
+++ b/README.md
@@ -36,7 +36,19 @@ CLIP-as-service is a low-latency high-scalability service for embedding images a
 
 ## Try it!
 
-An always-online demo server loaded with `ViT-L/14-336px` is there for you to play & test:
+An always-online server `api.clip.jina.ai` loaded with `ViT-L/14-336px` is there for you to play & test.
+Before you start, make sure you have created an access token via our [console website](https://console.clip.jina.ai/get_started),
+or via the CLI as described in [this guide](https://github.com/jina-ai/jina-hubble-sdk#create-a-new-pat):
+
+```bash
+jina auth token create <name of PAT> -e <expiration days>
+```
+
+Then, set the created token in the HTTP request header `Authorization` as `<your access token>`,
+or configure it via the `credential` parameter of the Python client.
+
+⚠️ Our demo server `demo-cas.jina.ai` has been sunset and is no longer available after **15th of Sept 2022**.
+
 
 ### Text & image embedding
 
@@ -50,8 +62,9 @@ An always-online demo server loaded with `ViT-L/14-336px` is there for you to pl
 
 ```bash
 curl \
--X POST https://demo-cas.jina.ai:8443/post \
+-X POST https://api.clip.jina.ai:8443/post \
 -H 'Content-Type: application/json' \
+-H 'Authorization: <your access token>' \
 -d '{"data":[{"text": "First do it"},
             {"text": "then do it right"},
             {"text": "then do it better"},
@@ -66,7 +79,9 @@ curl \
 # pip install clip-client
 from clip_client import Client
 
-c = Client('grpcs://demo-cas.jina.ai:2096')
+c = Client(
+    'grpcs://api.clip.jina.ai:2096', credential={'Authorization': '<your access token>'}
+)
 
 r = c.encode(
     [
@@ -101,8 +116,9 @@ There are four basic visual reasoning skills: object recognition, object countin
 
 ```bash
 curl \
--X POST https://demo-cas.jina.ai:8443/post \
+-X POST https://api.clip.jina.ai:8443/post \
 -H 'Content-Type: application/json' \
+-H 'Authorization: <your access token>' \
 -d '{"data":[{"uri": "https://picsum.photos/id/1/300/300",
             "matches": [{"text": "there is a woman in the photo"},
                         {"text": "there is a man in the photo"}]}],
@@ -129,8 +145,9 @@ gives:
 
 ```bash
 curl \
--X POST https://demo-cas.jina.ai:8443/post \
+-X POST https://api.clip.jina.ai:8443/post \
 -H 'Content-Type: application/json' \
+-H 'Authorization: <your access token>' \
 -d '{"data":[{"uri": "https://picsum.photos/id/133/300/300",
             "matches": [
             {"text": "the blue car is on the left, the red car is on the right"},
@@ -165,8 +182,9 @@ gives:
 
 ```bash
 curl \
--X POST https://demo-cas.jina.ai:8443/post \
+-X POST https://api.clip.jina.ai:8443/post \
 -H 'Content-Type: application/json' \
+-H 'Authorization: <your access token>' \
 -d '{"data":[{"uri": "https://picsum.photos/id/102/300/300",
             "matches": [{"text": "this is a photo of one berry"},
                         {"text": "this is a photo of two berries"},
@@ -655,6 +673,7 @@ Fun time! Note, unlike the previous example, here the input is an image and the
 
 
 
+
 ### Rank image-text matches via CLIP model
 
 From `0.3.0`, CLIP-as-service adds a new `/rank` endpoint that re-ranks cross-modal matches according to their joint likelihood in the CLIP model. For example, given an image Document with some predefined sentence matches as below:
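
Putting the pieces of the first Python hunk together, the end-to-end encode flow looks roughly like the sketch below. It is a minimal sketch only: it assumes `clip-client` is installed and a token has been created via `jina auth token create`; `<your access token>` is a placeholder, and the 768-dimension note assumes the `ViT-L/14-336px` model named in the diff.

```python
# Minimal sketch of the full encode flow implied by the truncated hunk above.
# Assumes `pip install clip-client`; '<your access token>' is a placeholder,
# not a real credential.
from clip_client import Client

c = Client(
    'grpcs://api.clip.jina.ai:2096',
    credential={'Authorization': '<your access token>'},
)

# Texts and image URIs can be mixed in one call; each input yields one vector.
r = c.encode(
    [
        'First do it',
        'then do it right',
        'then do it better',
        'https://picsum.photos/id/1/300/300',  # an image URI is embedded too
    ]
)
print(r.shape)  # e.g. (4, 768) for ViT-L/14-336px
```

Because texts and images land in the same embedding space, the same call serves both sides of a cross-modal search.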
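
For the `/rank` endpoint introduced in the last hunk, a sketch of the client-side call is below. It assumes `clip-client>=0.3.0` exposes a `rank` method on the same `Client` and that `docarray` is installed for the `Document` type; the token is again a placeholder, and the exact score field name may differ by version.

```python
# Sketch of the /rank endpoint: re-rank an image Document's predefined text
# matches by their joint likelihood under the CLIP model.
# Assumptions: clip-client>=0.3.0 with a `rank` method, docarray installed.
from clip_client import Client
from docarray import Document

c = Client(
    'grpcs://api.clip.jina.ai:2096',
    credential={'Authorization': '<your access token>'},
)

doc = Document(
    uri='https://picsum.photos/id/1/300/300',
    matches=[
        Document(text='there is a woman in the photo'),
        Document(text='there is a man in the photo'),
    ],
)

result = c.rank([doc])
for m in result[0].matches:
    # the server attaches a relevance score to each re-ranked match
    print(m.scores, m.text)
```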