From a03e57efee4b0190d82cae445b79f2b0f9bece67 Mon Sep 17 00:00:00 2001
From: YuanLiuuuuuu <3463423099@qq.com>
Date: Wed, 27 Apr 2022 21:36:42 +0800
Subject: [PATCH] [Feature]: Add download link for dalle

---
 configs/selfsup/cae/RAEDME.md | 4 ++++
 docs/en/algorithms/cae.md     | 4 ++++
 docs/zh_cn/algorithms/cae.md  | 4 ++++
 3 files changed, 12 insertions(+)

diff --git a/configs/selfsup/cae/RAEDME.md b/configs/selfsup/cae/RAEDME.md
index 205855ba7..a79750ece 100644
--- a/configs/selfsup/cae/RAEDME.md
+++ b/configs/selfsup/cae/RAEDME.md
@@ -13,6 +13,10 @@ We present a novel masked image modeling (MIM) approach, context autoencoder (CA
 
+## Prerequisite
+
+Create a new folder ``cae_ckpt`` under the root directory and download the
+[weights](https://download.openmmlab.com/mmselfsup/cae/dalle_encoder.pth) for the ``dalle`` encoder into that folder.
 ## Models and Benchmarks
 
 Here, we report the results of the model, which is pre-trained on ImageNet-1k
diff --git a/docs/en/algorithms/cae.md b/docs/en/algorithms/cae.md
index 205855ba7..a79750ece 100644
--- a/docs/en/algorithms/cae.md
+++ b/docs/en/algorithms/cae.md
@@ -13,6 +13,10 @@ We present a novel masked image modeling (MIM) approach, context autoencoder (CA
 
+## Prerequisite
+
+Create a new folder ``cae_ckpt`` under the root directory and download the
+[weights](https://download.openmmlab.com/mmselfsup/cae/dalle_encoder.pth) for the ``dalle`` encoder into that folder.
 ## Models and Benchmarks
 
 Here, we report the results of the model, which is pre-trained on ImageNet-1k
diff --git a/docs/zh_cn/algorithms/cae.md b/docs/zh_cn/algorithms/cae.md
index 205855ba7..a79750ece 100644
--- a/docs/zh_cn/algorithms/cae.md
+++ b/docs/zh_cn/algorithms/cae.md
@@ -13,6 +13,10 @@ We present a novel masked image modeling (MIM) approach, context autoencoder (CA
 
+## Prerequisite
+
+Create a new folder ``cae_ckpt`` under the root directory and download the
+[weights](https://download.openmmlab.com/mmselfsup/cae/dalle_encoder.pth) for the ``dalle`` encoder into that folder.
 ## Models and Benchmarks
 
 Here, we report the results of the model, which is pre-trained on ImageNet-1k
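
For reference, a minimal sketch of the prerequisite step the patch documents, assuming it is run from the repository root; the ``cae_ckpt`` folder name and the weights URL are taken from the added lines above, while the saved filename ``dalle_encoder.pth`` is an assumption matching the URL:

```python
# Sketch of the prerequisite described in the patch: create the cae_ckpt
# folder under the repository root and download the dalle encoder weights
# into it. Run from the repository root.
import os
import urllib.request

CKPT_DIR = "cae_ckpt"
WEIGHTS_URL = "https://download.openmmlab.com/mmselfsup/cae/dalle_encoder.pth"

os.makedirs(CKPT_DIR, exist_ok=True)  # no error if the folder already exists
urllib.request.urlretrieve(WEIGHTS_URL, os.path.join(CKPT_DIR, "dalle_encoder.pth"))
```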