[Docs] Replace markdownlint with mdformat for avoiding installing ruby #130

Merged: 1 commit, May 18, 2022

30 changes: 15 additions & 15 deletions .github/CODE_OF_CONDUCT.md
@@ -14,22 +14,22 @@ appearance, race, religion, or sexual identity and orientation.
Examples of behavior that contributes to creating a positive environment
include:

* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
- Using welcoming and inclusive language
- Being respectful of differing viewpoints and experiences
- Gracefully accepting constructive criticism
- Focusing on what is best for the community
- Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
- The use of sexualized language or imagery and unwelcome sexual attention or
advances
- Trolling, insulting/derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or electronic
address, without explicit permission
- Other conduct which could reasonably be considered inappropriate in a
professional setting

## Our Responsibilities

@@ -70,7 +70,7 @@ members of the project's leadership.
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at <https://www.contributor-covenant.org/version/1/4/code-of-conduct.html>

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see
<https://www.contributor-covenant.org/faq>

[homepage]: https://www.contributor-covenant.org
16 changes: 10 additions & 6 deletions .pre-commit-config.yaml
@@ -1,5 +1,5 @@
repos:
- repo: https://gitlab.com/pycqa/flake8.git
- repo: https://github.com/PyCQA/flake8
rev: 3.8.3
hooks:
- id: flake8
@@ -24,15 +24,19 @@ repos:
args: ["--remove"]
- id: mixed-line-ending
args: ["--fix=lf"]
- repo: https://github.com/markdownlint/markdownlint
rev: v0.11.0
hooks:
- id: markdownlint
args: ["-r", "~MD002,~MD013,~MD024,~MD029,~MD033,~MD034,~MD036", "-t", "allow_different_nesting"]
- repo: https://github.com/codespell-project/codespell
rev: v2.1.0
hooks:
- id: codespell
- repo: https://github.com/executablebooks/mdformat
rev: 0.7.14
hooks:
- id: mdformat
args: ["--number"]
additional_dependencies:
- mdformat-gfm
- mdformat_frontmatter
- linkify-it-py
- repo: https://github.com/myint/docformatter
rev: v1.3.1
hooks:
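To try the new hook locally, here is a minimal sketch, assuming `pre-commit` is installed in the current Python environment (the hook id and the `--number` argument come from the config above):

```shell
# One-time setup: install pre-commit and register the git hook.
pip install pre-commit
pre-commit install

# Run only the new mdformat hook against every Markdown file in the repo.
pre-commit run mdformat --all-files
```

Because mdformat and its plugins (`mdformat-gfm`, `mdformat_frontmatter`, `linkify-it-py`) are pure Python, this swap removes the Ruby toolchain that the markdownlint hook required.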
2 changes: 1 addition & 1 deletion README_zh-CN.md
@@ -146,7 +146,7 @@ MMFlow 也提供了其他更详细的教程,包括:

## 欢迎加入 OpenMMLab 社区

扫描下方的二维码可关注 OpenMMLab 团队的 [知乎官方账号](https://www.zhihu.com/people/openmmlab),加入 OpenMMLab 团队的 [官方交流 QQ 群](https://jq.qq.com/?_wv=1027&k=aCvMxdr3)
扫描下方的二维码可关注 OpenMMLab 团队的 [知乎官方账号](https://www.zhihu.com/people/openmmlab),加入 OpenMMLab 团队的 [官方交流 QQ 群](https://jq.qq.com/?_wv=1027&k=aCvMxdr3)

<div align="center">
<img src="resources/zhihu_qrcode.jpg" height="400" /> <img src="resources/qq_group_qrcode.jpg" height="400" />
10 changes: 5 additions & 5 deletions configs/gma/README.md
@@ -218,10 +218,10 @@ unpublished approaches. Code is available at https://github.com/zacjiang/GMA.
## Citation

@article{jiang2021learning,
title={Learning to Estimate Hidden Motions with Global Motion Aggregation},
author={Jiang, Shihao and Campbell, Dylan and Lu, Yao and Li, Hongdong and Hartley, Richard},
journal={arXiv preprint arXiv:2104.02409},
year={2021}
title={Learning to Estimate Hidden Motions with Global Motion Aggregation},
author={Jiang, Shihao and Campbell, Dylan and Lu, Yao and Li, Hongdong and Hartley, Richard},
journal={arXiv preprint arXiv:2104.02409},
year={2021}
}

[1] The mixed dataset consisted of FlyingChairs, FlyingThing3d, Sintel, KITTI2015, and HD1K.
\[1\] The mixed dataset consisted of FlyingChairs, FlyingThing3d, Sintel, KITTI2015, and HD1K.
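The `[1]` → `\[1\]` change above is mdformat escaping text that Markdown could otherwise parse as a link reference. To reproduce the formatting on a single file without going through pre-commit, here is a sketch, assuming the pinned plugins are installed (package names taken from the config diff; `pip` resolves `mdformat` itself as a dependency):

```shell
# Install the formatter plugins pinned in .pre-commit-config.yaml.
pip install mdformat-gfm mdformat_frontmatter linkify-it-py

# Reformat in place; --number renumbers ordered lists, matching the hook's args.
mdformat --number configs/gma/README.md
```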
2 changes: 1 addition & 1 deletion configs/raft/README.md
@@ -121,4 +121,4 @@ is available at https://github.com/princeton-vl/RAFT.
}
```

[1] The mixed dataset consisted of FlyingChairs, FlyingThing3d, Sintel, KITTI2015, and HD1K.
\[1\] The mixed dataset consisted of FlyingChairs, FlyingThing3d, Sintel, KITTI2015, and HD1K.
6 changes: 3 additions & 3 deletions docs/en/changelog.md
@@ -19,12 +19,12 @@

### Documents

- Add zh-cn doc 0_config_.md ([#126](https://github.com/open-mmlab/mmflow/pull/126))
- Add zh-cn doc 0_config\_.md ([#126](https://github.com/open-mmlab/mmflow/pull/126))

## New Contributors

* @HiiiXinyiii made their first contribution in https://github.com/open-mmlab/mmflow/pull/118
* @SheffieldCao made their first contribution in https://github.com/open-mmlab/mmflow/pull/126
- @HiiiXinyiii made their first contribution in https://github.com/open-mmlab/mmflow/pull/118
- @SheffieldCao made their first contribution in https://github.com/open-mmlab/mmflow/pull/126

## v0.4.0(04/01/2022)

2 changes: 1 addition & 1 deletion docs/en/data_prepare/FlyingThings3d/README.md
@@ -74,4 +74,4 @@
| | | | | | | | ├── OpticalFlowIntoPast_xxxx_R.pfm
```

You can download datasets via [BitTorrent] (https://lmb.informatik.uni-freiburg.de/data/SceneFlowDatasets_CVPR16/Release_april16/data/FlyingThings3D/raw_data/flyingthings3d__frames_cleanpass.tar.torrent). Then, you need to unzip and move corresponding datasets to follow the folder structure shown above. The datasets have been well-prepared by the original authors.
You can download datasets via \[BitTorrent\] (https://lmb.informatik.uni-freiburg.de/data/SceneFlowDatasets_CVPR16/Release_april16/data/FlyingThings3D/raw_data/flyingthings3d__frames_cleanpass.tar.torrent). Then, you need to unzip and move corresponding datasets to follow the folder structure shown above. The datasets have been well-prepared by the original authors.
2 changes: 1 addition & 1 deletion docs/en/data_prepare/FlyingThings3d_subset/README.md
@@ -73,4 +73,4 @@
| | | | | ├── xxxxxxx.png
```

You can download datasets via [BitTorrent] (https://lmb.informatik.uni-freiburg.de/data/FlyingThings3D_subset/FlyingThings3D_subset_image_clean.tar.bz2.torrent). Then, you need to unzip and move corresponding datasets to follow the folder structure shown above. The datasets have been well-prepared by the original authors.
You can download datasets via \[BitTorrent\] (https://lmb.informatik.uni-freiburg.de/data/FlyingThings3D_subset/FlyingThings3D_subset_image_clean.tar.bz2.torrent). Then, you need to unzip and move corresponding datasets to follow the folder structure shown above. The datasets have been well-prepared by the original authors.
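A sketch of the download-and-extract step for either FlyingThings3D package, assuming `aria2c` as the BitTorrent client and a hypothetical `data/` target directory:

```shell
# Fetch the archive via BitTorrent (torrent URL as given above).
aria2c "https://lmb.informatik.uni-freiburg.de/data/FlyingThings3D_subset/FlyingThings3D_subset_image_clean.tar.bz2.torrent"

# Unpack and move the contents to match the folder structure shown above.
tar -xjf FlyingThings3D_subset_image_clean.tar.bz2
mv FlyingThings3D_subset data/FlyingThings3D_subset
```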
71 changes: 37 additions & 34 deletions docs/en/getting_started.md
@@ -4,13 +4,15 @@ This page provides basic tutorials about the usage of MMFlow.
For installation instructions, please see [install.md](install.md).

<!-- TOC -->

- [Getting Started](#getting-started)
- [Prepare datasets](#prepare-datasets)
- [Inference with Pre-trained Models](#inference-with-pre-trained-models)
- [Run a demo](#run-a-demo)
- [Test a dataset](#test-a-dataset)
- [Train a model](#train-a-model)
- [Tutorials](#tutorials)

<!-- TOC -->

## Prepare datasets
@@ -39,50 +41,51 @@ We provide scripts to run demos. Here is an example to predict the optical flow

1. [image demo](../demo/image_demo.py)

```shell
python demo/image_demo.py ${IMAGE1} ${IMAGE2} ${CONFIG_FILE} ${CHECKPOINT_FILE} ${OUTPUT_DIR} \
[--out_prefix] ${OUTPUT_PREFIX} [--device] ${DEVICE}
```
```shell
python demo/image_demo.py ${IMAGE1} ${IMAGE2} ${CONFIG_FILE} ${CHECKPOINT_FILE} ${OUTPUT_DIR} \
[--out_prefix] ${OUTPUT_PREFIX} [--device] ${DEVICE}
```

Optional arguments:
Optional arguments:

- `--out_prefix`: The prefix for the output results including flow file and visualized flow map.
- `--device`: Device used for inference.
- `--out_prefix`: The prefix for the output results including flow file and visualized flow map.
- `--device`: Device used for inference.

Example:
Example:

Assume that you have already downloaded the checkpoints to the directory `checkpoints/`,
and output will be saved in the directory `raft_demo`.
Assume that you have already downloaded the checkpoints to the directory `checkpoints/`,
and output will be saved in the directory `raft_demo`.

```shell
python demo/image_demo.py demo/frame_0001.png demo/frame_0002.png \
configs/raft/raft_8x2_100k_mixed_368x768.py \
checkpoints/raft_8x2_100k_mixed_368x768.pth raft_demo
```
```shell
python demo/image_demo.py demo/frame_0001.png demo/frame_0002.png \
configs/raft/raft_8x2_100k_mixed_368x768.py \
checkpoints/raft_8x2_100k_mixed_368x768.pth raft_demo
```

2. [video demo](../demo/video_demo.py)

```shell
python demo/video_demo.py ${VIDEO} ${CONFIG_FILE} ${CHECKPOINT_FILE} ${OUTPUT_FILE} \
[--gt] ${GROUND_TRUTH} [--device] ${DEVICE}
```
```shell
python demo/video_demo.py ${VIDEO} ${CONFIG_FILE} ${CHECKPOINT_FILE} ${OUTPUT_FILE} \
[--gt] ${GROUND_TRUTH} [--device] ${DEVICE}
```

Optional arguments:

Optional arguments:
- `--gt`: The video file of ground truth for input video.
If specified, the ground truth will be concatenated with the predicted result as a comparison.
- `--device`: Device used for inference.
- `--gt`: The video file of ground truth for input video.
If specified, the ground truth will be concatenated with the predicted result as a comparison.
- `--device`: Device used for inference.

Example:
Example:

Assume that you have already downloaded the checkpoints to the directory `checkpoints/`,
and output will be saved as `raft_demo.mp4`.
Assume that you have already downloaded the checkpoints to the directory `checkpoints/`,
and output will be saved as `raft_demo.mp4`.

```shell
python demo/video_demo.py demo/demo.mp4 \
configs/raft/raft_8x2_100k_mixed_368x768.py \
checkpoints/raft_8x2_100k_mixed_368x768.pth \
raft_demo.mp4 --gt demo/demo_gt.mp4
```
```shell
python demo/video_demo.py demo/demo.mp4 \
configs/raft/raft_8x2_100k_mixed_368x768.py \
checkpoints/raft_8x2_100k_mixed_368x768.pth \
raft_demo.mp4 --gt demo/demo_gt.mp4
```

### Test a dataset

@@ -100,7 +103,7 @@ Optional arguments:
- `--show_dir`: Directory to save the visualized flow maps. If not specified, the flow maps will not be saved.
- `--eval`: Evaluation metrics, e.g., "EPE".
- `--cfg-option`: Override some settings in the used config, the key-value pair in xxx=yyy format will be merged into config file.
For example, '--cfg-option model.encoder.in_channels=6'.
For example, '--cfg-option model.encoder.in_channels=6'.

Examples:
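The repository's own example commands are collapsed in this diff view. As an illustration only, a single-GPU test run could look like the following, assuming the usual OpenMMLab `tools/test.py` entry point and reusing the config and checkpoint names from the demo section (the `raft_vis` output directory is hypothetical):

```shell
python tools/test.py configs/raft/raft_8x2_100k_mixed_368x768.py \
    checkpoints/raft_8x2_100k_mixed_368x768.pth \
    --eval EPE --show_dir raft_vis
```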

@@ -131,7 +134,7 @@ Optional arguments:
- `--seed`: Seed id for random state in python, numpy and pytorch to generate random numbers.
- `--deterministic`: If specified, it will set deterministic options for CUDNN backend.
- `--cfg-options`: Override some settings in the used config, the key-value pair in xxx=yyy format will be merged into config file.
For example, '--cfg-option model.encoder.in_channels=6'.
For example, '--cfg-option model.encoder.in_channels=6'.

Difference between `resume-from` and `load-from`:
`resume-from` loads both the model weights and optimizer status, and the epoch/iter is also inherited from the specified checkpoint. It is usually used for resuming the training process that is interrupted accidentally.
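For instance, a hypothetical resume invocation, assuming the usual OpenMMLab `tools/train.py` entry point and the default `work_dirs` layout:

```shell
# Resume an interrupted run: weights, optimizer state, and the epoch/iter
# counter are all restored from the checkpoint.
python tools/train.py configs/raft/raft_8x2_100k_mixed_368x768.py \
    --resume-from work_dirs/raft_8x2_100k_mixed_368x768/latest.pth
```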
4 changes: 2 additions & 2 deletions docs/en/install.md
@@ -57,10 +57,10 @@ c. Install MMCV, we recommend you to install the pre-built mmcv as below.
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html
```

Please replace ``{cu_version}`` and ``{torch_version}`` in the url to your desired one. mmcv-full is only compiled on
Please replace `{cu_version}` and `{torch_version}` in the url to your desired one. mmcv-full is only compiled on
PyTorch 1.x.0 because the compatibility usually holds between 1.x.0 and 1.x.1. If your PyTorch version is 1.x.1,
you can install mmcv-full compiled with PyTorch 1.x.0 and it usually works well.
For example, to install the latest ``mmcv-full`` with ``CUDA 10.2`` and ``PyTorch 1.10.0``, use the following command:
For example, to install the latest `mmcv-full` with `CUDA 10.2` and `PyTorch 1.10.0`, use the following command:

```shell
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu102/torch1.10/index.html
```
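If you are unsure which `{cu_version}`/`{torch_version}` pair applies, one way to check, assuming PyTorch is already installed:

```shell
# Prints e.g. "1.10.0 10.2" -> use cu102/torch1.10 in the URL.
python -c "import torch; print(torch.__version__, torch.version.cuda)"
```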
46 changes: 23 additions & 23 deletions docs/en/intro.md
@@ -5,10 +5,10 @@ MMFlow, and provides links to detailed tutorials about MMFlow.

## What is Optical flow estimation

Optical flow is a 2D velocity field, representing the **apparent 2D image motion** of pixels from the reference image to the target image [1].
Optical flow is a 2D velocity field, representing the **apparent 2D image motion** of pixels from the reference image to the target image \[1\].
The task can be defined as follows: Given two images img<sub>1</sub> ,img<sub>2</sub> ∈ R<sup>HxWx3</sup>,
the flow field U ∈ R<sup>HxWx2</sup> describes the horizontal and vertical image motion between img<sub>1</sub> and img<sub>2</sub> [2].
Here is an example for visualized flow map from [Sintel dataset](http://sintel.is.tue.mpg.de/) [3-4]. The character in origin images moves left,
the flow field U ∈ R<sup>HxWx2</sup> describes the horizontal and vertical image motion between img<sub>1</sub> and img<sub>2</sub> \[2\].
Here is an example for visualized flow map from [Sintel dataset](http://sintel.is.tue.mpg.de/) \[3-4\]. The character in origin images moves left,
so the motion raises the optical flow, and referring to the color wheel whose color represents the direction on the right, the left flow can be rendered
as blue.
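Written out as an equation (a standard brightness-constancy formulation, added for concreteness rather than quoted from the file): a pixel at location x in the reference image reappears in the target image, displaced by its flow vector.

```latex
% Each pixel x of img1 reappears in img2, displaced by the flow U(x):
I_1(\mathbf{x}) \approx I_2\bigl(\mathbf{x} + \mathbf{U}(\mathbf{x})\bigr),
\qquad \mathbf{U} \in \mathbb{R}^{H \times W \times 2}
```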

@@ -19,19 +19,19 @@ as blue.
Note that optical flow only focuses on images, and is not relative to the projection of the 3D motion of points
in the scene onto the image plane.

>One may ask, "What about the motion of a smooth surface like a smooth rotating sphere?"
> One may ask, "What about the motion of a smooth surface like a smooth rotating sphere?"

If the surface of the sphere is untextured then there will be no apparent motion on the image plane and hence no optical flow [2].
It illustrates that the motion field [5], corresponding to the motion of points in the scene,
If the surface of the sphere is untextured then there will be no apparent motion on the image plane and hence no optical flow \[2\].
It illustrates that the motion field \[5\], corresponding to the motion of points in the scene,
is not always the same as the optical flow field. However, for most applications of optical flow,
it is the motion field that is required and, typically, the world has enough structure so that optical flow
provides a good approximation to the motion field [2]. As long as the optical flow field provides a reasonable approximation,
provides a good approximation to the motion field \[2\]. As long as the optical flow field provides a reasonable approximation,
it can be considered as a strong hint of sequential frames and is used in a variety of situations, e.g., action recognition,
autonomous driving, and video editing [6].
autonomous driving, and video editing \[6\].

The metrics to compare the performance of the optical flow methods are *EPE*, EndPoint Error over the complete frames,
and *Fl-all*, the percentage of outliers averaged over all pixels, where inliers are defined as EPE < 3 pixels or < 5%.
The mainstream benchmark datasets are Sintel for dense optical flow and KITTI [7-9] for sparse optical flow.
and *Fl-all*, the percentage of outliers averaged over all pixels, where inliers are defined as EPE \< 3 pixels or \< 5%.
The mainstream benchmark datasets are Sintel for dense optical flow and KITTI \[7-9\] for sparse optical flow.
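In symbols, a standard definition of EPE, supplied for concreteness (Ω is the set of pixels with ground-truth flow; this formula is not quoted from the file):

```latex
\mathrm{EPE} = \frac{1}{\lvert \Omega \rvert} \sum_{\mathbf{x} \in \Omega}
\bigl\lVert \mathbf{U}_{\mathrm{pred}}(\mathbf{x}) - \mathbf{U}_{\mathrm{gt}}(\mathbf{x}) \bigr\rVert_2
```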

## What is MMFlow

@@ -45,13 +45,13 @@ and below is its whole framework:
MMFlow consists of 4 main parts, `datasets`, `models`, `core` and `apis`.

- `datasets` is for datasets loading and data augmentation. In this part,
we support various datasets for supervised optical flow algorithms,
useful data augmentation transforms in `pipelines` for pre-processing image pairs
and flow data (including its auxiliary data), and samplers for data loading in `samplers`.
we support various datasets for supervised optical flow algorithms,
useful data augmentation transforms in `pipelines` for pre-processing image pairs
and flow data (including its auxiliary data), and samplers for data loading in `samplers`.

- `models` is the most vital part containing models of learning-based optical flow.
As you can see, we implement each model as a flow estimator and decompose it into two components encoder and decoder.
The loss functions for flow models training are in this module as well.
As you can see, we implement each model as a flow estimator and decompose it into two components encoder and decoder.
The loss functions for flow models training are in this module as well.

- `core` provides evaluation tools and customized hooks for model training.

@@ -82,11 +82,11 @@ Here is a detailed step-by-step guide to learn more about MMFlow:
## References

1. Michael Black, Optical flow: The "good parts" version, Machine Learning Summer School (MLSS), Tübiungen, 2013.
2. Black M J. Robust incremental optical flow[D]. Yale University, 1992.
3. Butler D J, Wulff J, Stanley G B, et al. A naturalistic open source movie for optical flow evaluation[C]//European conference on computer vision. Springer, Berlin, Heidelberg, 2012: 611-625.
4. Wulff J, Butler D J, Stanley G B, et al. Lessons and insights from creating a synthetic optical flow benchmark[C]//European Conference on Computer Vision. Springer, Berlin, Heidelberg, 2012: 168-177.
5. Horn B, Klaus B, Horn P. Robot vision[M]. MIT Press, 1986.
6. Sun D, Yang X, Liu M Y, et al. Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 8934-8943.
7. Geiger A, Lenz P, Urtasun R. Are we ready for autonomous driving? the kitti vision benchmark suite[C]//2012 IEEE conference on computer vision and pattern recognition. IEEE, 2012: 3354-3361.
8. Menze M, Heipke C, Geiger A. Object scene flow[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2018, 140: 60-76.
9. Menze M, Heipke C, Geiger A. Joint 3d estimation of vehicles and scene flow[J]. ISPRS annals of the photogrammetry, remote sensing and spatial information sciences, 2015, 2: 427.
2. Black M J. Robust incremental optical flow\[D\]. Yale University, 1992.
3. Butler D J, Wulff J, Stanley G B, et al. A naturalistic open source movie for optical flow evaluation\[C\]//European conference on computer vision. Springer, Berlin, Heidelberg, 2012: 611-625.
4. Wulff J, Butler D J, Stanley G B, et al. Lessons and insights from creating a synthetic optical flow benchmark\[C\]//European Conference on Computer Vision. Springer, Berlin, Heidelberg, 2012: 168-177.
5. Horn B, Klaus B, Horn P. Robot vision\[M\]. MIT Press, 1986.
6. Sun D, Yang X, Liu M Y, et al. Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume\[C\]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 8934-8943.
7. Geiger A, Lenz P, Urtasun R. Are we ready for autonomous driving? the kitti vision benchmark suite\[C\]//2012 IEEE conference on computer vision and pattern recognition. IEEE, 2012: 3354-3361.
8. Menze M, Heipke C, Geiger A. Object scene flow\[J\]. ISPRS Journal of Photogrammetry and Remote Sensing, 2018, 140: 60-76.
9. Menze M, Heipke C, Geiger A. Joint 3d estimation of vehicles and scene flow\[J\]. ISPRS annals of the photogrammetry, remote sensing and spatial information sciences, 2015, 2: 427.