[Doc] Update README.md in configs according to latest standard. (open-mmlab#1233)

* fix README.md in configs

* fix README.md in configs

* modify [ALGORITHM] to [BACKBONE] in backbone config README.md
MengzhangLI committed Jan 25, 2022
1 parent da1fcf5 commit 80e8504
Showing 45 changed files with 225 additions and 272 deletions.
14 changes: 4 additions & 10 deletions .dev/md2yml.py

@@ -87,12 +87,13 @@ def parse_md(md_file):
     current_dataset = ''
     while i < len(lines):
         line = lines[i].strip()
+        # In latest README.md the title and url are in the third line.
+        if i == 2:
+            paper_url = lines[i].split('](')[1].split(')')[0]
+            paper_title = lines[i].split('](')[0].split('[')[1]
         if len(line) == 0:
             i += 1
             continue
-        if line[:2] == '# ':
-            paper_title = line.replace('# ', '')
-            i += 1
         elif line[:3] == '<a ':
             content = etree.HTML(line)
             node = content.xpath('//a')[0]
@@ -112,13 +113,6 @@ def parse_md(md_file):
             assert repo_url is not None, (
                 f'{collection_name} hasn\'t official repo url.')
             i += 1
-        elif line[:9] == '<summary ':
-            content = etree.HTML(line)
-            nodes = content.xpath('//a')
-            assert len(nodes) == 1, (
-                'summary tag should only have single a tag.')
-            paper_url = nodes[0].get('href', None)
-            i += 1
         elif line[:4] == '### ':
             datasets.append(line[4:])
             current_dataset = line[4:]
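The new branch assumes every config README now follows the updated layout: a short algorithm name as the level-1 title, a blank line, and the paper title plus URL as a Markdown link on the third line. A minimal sketch of that extraction, using the same split-based string handling the commit adds (the sample README text below is illustrative, not copied from the repository):

```python
# Minimal sketch of the third-line parsing rule added to md2yml.py.
# The sample README content is illustrative, not taken from the repo.
sample_readme = """# ANN

[Asymmetric Non-local Neural Networks for Semantic Segmentation](https://arxiv.org/abs/1908.07678)
"""

lines = sample_readme.splitlines()
third_line = lines[2]  # '[<paper title>](<paper url>)'

# Same split-based extraction the new `if i == 2:` branch performs:
paper_url = third_line.split('](')[1].split(')')[0]
paper_title = third_line.split('](')[0].split('[')[1]

print(paper_title)  # Asymmetric Non-local Neural Networks for Semantic Segmentation
print(paper_url)    # https://arxiv.org/abs/1908.07678
```

Note that this split would break if a paper title itself contained `](`; the READMEs touched in this commit all use plain titles, so the simple approach holds.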
2 changes: 1 addition & 1 deletion README.md

@@ -154,7 +154,7 @@ A Colab tutorial is also provided. You may preview the notebook [here](demo/MMSe
 
 If you find this project useful in your research, please consider cite:
 
-```latex
+```bibtex
 @misc{mmseg2020,
     title={{MMSegmentation}: OpenMMLab Semantic Segmentation Toolbox and Benchmark},
     author={MMSegmentation Contributors},
2 changes: 1 addition & 1 deletion README_zh-CN.md

@@ -153,7 +153,7 @@ MMSegmentation 是一个基于 PyTorch 的语义分割开源工具箱。它是 O
 
 如果你觉得本项目对你的研究工作有所帮助,请参考如下 bibtex 引用 MMSegmentation。
 
-```latex
+```bibtex
 @misc{mmseg2020,
     title={{MMSegmentation}: OpenMMLab Semantic Segmentation Toolbox and Benchmark},
     author={MMSegmentation Contributors},
11 changes: 6 additions & 5 deletions configs/ann/README.md

@@ -1,4 +1,6 @@
-# Asymmetric Non-local Neural Networks for Semantic Segmentation
+# ANN
+
+[Asymmetric Non-local Neural Networks for Semantic Segmentation](https://arxiv.org/abs/1908.07678)
 
 ## Introduction
 
@@ -19,10 +21,10 @@ The non-local module works as a particularly useful technique for semantic segme
 <img src="https://user-images.githubusercontent.com/24582831/142898322-3bbd578c-e488-4bae-9c14-7598adac5cbd.png" width="70%"/>
 </div>
 
-<details>
-<summary align="right"><a href="https://arxiv.org/abs/1908.07678">ANN (ICCV'2019)</a></summary>
-
-```latex
+## Citation
+
+```bibtex
 @inproceedings{zhu2019asymmetric,
   title={Asymmetric non-local neural networks for semantic segmentation},
   author={Zhu, Zhen and Xu, Mengde and Bai, Song and Huang, Tengteng and Bai, Xiang},
@@ -32,7 +34,6 @@ The non-local module works as a particularly useful technique for semantic segme
 }
 ```
 
-</details>
 
 ## Results and models
11 changes: 5 additions & 6 deletions configs/apcnet/README.md

@@ -1,4 +1,6 @@
-# Adaptive Pyramid Context Network for Semantic Segmentation
+# APCNet
+
+[Adaptive Pyramid Context Network for Semantic Segmentation](https://openaccess.thecvf.com/content_CVPR_2019/html/He_Adaptive_Pyramid_Context_Network_for_Semantic_Segmentation_CVPR_2019_paper.html)
 
 ## Introduction
 
@@ -19,10 +21,9 @@ Recent studies witnessed that context features can significantly improve the per
 <img src="https://user-images.githubusercontent.com/24582831/142898638-e1c0c6ae-9270-448e-aa01-bbac3a236db5.png" width="70%"/>
 </div>
 
-<details>
-<summary align="right"><a href="https://openaccess.thecvf.com/content_CVPR_2019/html/He_Adaptive_Pyramid_Context_Network_for_Semantic_Segmentation_CVPR_2019_paper.html">APCNet (CVPR'2019)</a></summary>
+## Citation
 
-```latex
+```bibtex
 @InProceedings{He_2019_CVPR,
 author = {He, Junjun and Deng, Zhongying and Zhou, Lei and Wang, Yali and Qiao, Yu},
 title = {Adaptive Pyramid Context Network for Semantic Segmentation},
@@ -32,8 +33,6 @@ year = {2019}
 }
 ```
 
-</details>
-
 ## Results and models
 
 ### Cityscapes
11 changes: 5 additions & 6 deletions configs/bisenetv1/README.md

@@ -1,4 +1,6 @@
-# BiSeNet: Bilateral Segmentation Network for Real-time Semantic Segmentation
+# BiSeNetV1
+
+[BiSeNet: Bilateral Segmentation Network for Real-time Semantic Segmentation](https://arxiv.org/abs/1808.00897)
 
 ## Introduction
 
@@ -19,10 +21,9 @@ Semantic segmentation requires both rich spatial information and sizeable recept
 <img src="https://user-images.githubusercontent.com/24582831/142898839-a0a78148-848a-41b2-8682-b1f61ac004ba.png" width="70%"/>
 </div>
 
-<details>
-<summary align="right"><a href="https://arxiv.org/abs/1808.00897">BiSeNetV1 (ECCV'2018)</a></summary>
+## Citation
 
-```latex
+```bibtex
 @inproceedings{yu2018bisenet,
   title={Bisenet: Bilateral segmentation network for real-time semantic segmentation},
   author={Yu, Changqian and Wang, Jingbo and Peng, Chao and Gao, Changxin and Yu, Gang and Sang, Nong},
@@ -32,8 +33,6 @@ Semantic segmentation requires both rich spatial information and sizeable recept
 }
 ```
 
-</details>
-
 ## Results and models
 
 ### Cityscapes
10 changes: 5 additions & 5 deletions configs/bisenetv2/README.md

@@ -1,4 +1,6 @@
-# Bisenet v2: Bilateral Network with Guided Aggregation for Real-time Semantic Segmentation
+# BiSeNetV2
+
+[Bisenet v2: Bilateral Network with Guided Aggregation for Real-time Semantic Segmentation](https://arxiv.org/abs/2004.02147)
 
 ## Introduction
 
@@ -19,10 +21,9 @@ The low-level details and high-level semantics are both essential to the semanti
 <img src="https://user-images.githubusercontent.com/24582831/142898966-ec4a81da-b4b0-41ee-b083-1d964582c18a.png" width="70%"/>
 </div>
 
-<details>
-<summary align="right"><a href="https://arxiv.org/abs/2004.02147">BiSeNetV2 (IJCV'2021)</a></summary>
+## Citation
 
-```latex
+```bibtex
 @article{yu2021bisenet,
   title={Bisenet v2: Bilateral network with guided aggregation for real-time semantic segmentation},
   author={Yu, Changqian and Gao, Changxin and Wang, Jingbo and Yu, Gang and Shen, Chunhua and Sang, Nong},
@@ -33,7 +34,6 @@ The low-level details and high-level semantics are both essential to the semanti
 }
 ```
 
-</details>
 
 ## Results and models
10 changes: 5 additions & 5 deletions configs/ccnet/README.md

@@ -1,4 +1,6 @@
-# CCNet: Criss-Cross Attention for Semantic Segmentation
+# CCNet
+
+[CCNet: Criss-Cross Attention for Semantic Segmentation](https://arxiv.org/abs/1811.11721)
 
 ## Introduction
 
@@ -19,10 +21,9 @@ Contextual information is vital in visual understanding problems, such as semant
 <img src="https://user-images.githubusercontent.com/24582831/142899159-b329c12a-0fde-44df-8718-def6cfb004e4.png" width="70%"/>
 </div>
 
-<details>
-<summary align="right"><a href="https://arxiv.org/abs/1811.11721">CCNet (ICCV'2019)</a></summary>
+## Citation
 
-```latex
+```bibtex
 @article{huang2018ccnet,
   title={CCNet: Criss-Cross Attention for Semantic Segmentation},
   author={Huang, Zilong and Wang, Xinggang and Huang, Lichao and Huang, Chang and Wei, Yunchao and Liu, Wenyu},
@@ -31,7 +32,6 @@ Contextual information is vital in visual understanding problems, such as semant
 }
 ```
 
-</details>
 
 ## Results and models
11 changes: 5 additions & 6 deletions configs/cgnet/README.md

@@ -1,4 +1,6 @@
-# CGNet: A Light-weight Context Guided Network for Semantic Segmentation
+# CGNet
+
+[CGNet: A Light-weight Context Guided Network for Semantic Segmentation](https://arxiv.org/abs/1811.08201)
 
 ## Introduction
 
@@ -19,10 +21,9 @@ The demand of applying semantic segmentation model on mobile devices has been in
 <img src="https://user-images.githubusercontent.com/24582831/142900351-89559574-79cc-4f57-8f69-5d88765ec38d.png" width="80%"/>
 </div>
 
-<details>
-<summary align="right"><a href="https://arxiv.org/pdf/1811.08201.pdf">CGNet (TIP'2020)</a></summary>
+## Citation
 
-```latext
+```bibtext
 @article{wu2020cgnet,
   title={Cgnet: A light-weight context guided network for semantic segmentation},
   author={Wu, Tianyi and Tang, Sheng and Zhang, Rui and Cao, Juan and Zhang, Yongdong},
@@ -34,8 +35,6 @@ The demand of applying semantic segmentation model on mobile devices has been in
 }
 ```
 
-</details>
-
 ## Results and models
 
 ### Cityscapes
2 changes: 1 addition & 1 deletion configs/cgnet/cgnet.yml

@@ -4,7 +4,7 @@ Collections:
     Training Data:
     - Cityscapes
   Paper:
-    URL: https://arxiv.org/pdf/1811.08201.pdf
+    URL: https://arxiv.org/abs/1811.08201
     Title: 'CGNet: A Light-weight Context Guided Network for Semantic Segmentation'
   README: configs/cgnet/README.md
   Code:
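These .yml files hold the machine-readable metadata that .dev/md2yml.py regenerates from the READMEs, which is why the Paper URL is normalized here alongside the README changes. A minimal sketch of reading the updated fields back out with PyYAML; the `Name: CGNet` key is assumed for illustration, and only the Paper/README fields are taken from the hunk above:

```python
# Minimal sketch: reading the Paper metadata shown in the cgnet.yml hunk.
# Requires PyYAML. The `Name` key is an assumption for illustration; the
# Paper and README fields mirror the diff above.
import yaml

snippet = """
Collections:
- Name: CGNet
  Paper:
    URL: https://arxiv.org/abs/1811.08201
    Title: 'CGNet: A Light-weight Context Guided Network for Semantic Segmentation'
  README: configs/cgnet/README.md
"""

collection = yaml.safe_load(snippet)['Collections'][0]
print(collection['Paper']['URL'])  # https://arxiv.org/abs/1811.08201
print(collection['README'])        # configs/cgnet/README.md
```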
11 changes: 5 additions & 6 deletions configs/danet/README.md

@@ -1,4 +1,6 @@
-# Dual Attention Network for Scene Segmentation
+# DANet
+
+[Dual Attention Network for Scene Segmentation](https://arxiv.org/abs/1809.02983)
 
 ## Introduction
 
@@ -19,10 +21,9 @@ In this paper, we address the scene segmentation task by capturing rich contextu
 <img src="https://user-images.githubusercontent.com/24582831/142900467-f832fdb9-3b7d-47d3-8e80-e6ee9303bdfb.png" width="70%"/>
 </div>
 
-<details>
-<summary align="right"><a href="https://arxiv.org/abs/1809.02983">DANet (CVPR'2019)</a></summary>
+## Citation
 
-```latex
+```bibtex
 @article{fu2018dual,
   title={Dual Attention Network for Scene Segmentation},
   author={Jun Fu, Jing Liu, Haijie Tian, Yong Li, Yongjun Bao, Zhiwei Fang,and Hanqing Lu},
@@ -31,8 +32,6 @@ In this paper, we address the scene segmentation task by capturing rich contextu
 }
 ```
 
-</details>
-
 ## Results and models
 
 ### Cityscapes
16 changes: 6 additions & 10 deletions configs/deeplabv3/README.md

@@ -1,4 +1,6 @@
-# Rethinking atrous convolution for semantic image segmentation
+# DeepLabV3
+
+[Rethinking atrous convolution for semantic image segmentation](https://arxiv.org/abs/1706.05587)
 
 ## Introduction
 
@@ -19,10 +21,9 @@ In this work, we revisit atrous convolution, a powerful tool to explicitly adjus
 <img src="https://user-images.githubusercontent.com/24582831/142900575-f30a7755-09aa-406a-bf78-45893a61ee9a.png" width="80%"/>
 </div>
 
-<details>
-<summary align="right"><a href="https://arxiv.org/abs/1706.05587">DeepLabV3 (ArXiv'2017)</a></summary>
+## Citation
 
-```latext
+```bibtext
 @article{chen2017rethinking,
   title={Rethinking atrous convolution for semantic image segmentation},
   author={Chen, Liang-Chieh and Papandreou, George and Schroff, Florian and Adam, Hartwig},
@@ -31,14 +32,8 @@ In this work, we revisit atrous convolution, a powerful tool to explicitly adjus
 }
 ```
 
-</details>
-
 ## Results and models
 
-:::{note}
-`D-8` here corresponding to the output stride 8 setting for DeepLab series.
-:::
-
 ### Cityscapes
 
 | Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
@@ -117,4 +112,5 @@ In this work, we revisit atrous convolution, a powerful tool to explicitly adjus
 
 Note:
 
+- `D-8` here corresponding to the output stride 8 setting for DeepLab series.
 - `FP16` means Mixed Precision (FP16) is adopted in training.
18 changes: 7 additions & 11 deletions configs/deeplabv3plus/README.md

@@ -1,4 +1,6 @@
-# Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation
+# DeepLabV3+
+
+[Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation](https://arxiv.org/abs/1802.02611)
 
 ## Introduction
 
@@ -19,10 +21,9 @@ Spatial pyramid pooling module or encode-decoder structure are used in deep neur
 <img src="https://user-images.githubusercontent.com/24582831/142900680-3e2c3098-8341-4760-bbfd-b1d7d29968ea.png" width="70%"/>
 </div>
 
-<details>
-<summary align="right"><a href="https://arxiv.org/abs/1802.02611">DeepLabV3+ (CVPR'2018)</a></summary>
+## Citation
 
-```latex
+```bibtex
 @inproceedings{deeplabv3plus2018,
   title={Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation},
   author={Liang-Chieh Chen and Yukun Zhu and George Papandreou and Florian Schroff and Hartwig Adam},
@@ -31,15 +32,8 @@ Spatial pyramid pooling module or encode-decoder structure are used in deep neur
 }
 ```
 
-</details>
-
 ## Results and models
 
-:::{note}
-`D-8`/`D-16` here corresponding to the output stride 8/16 setting for DeepLab series.
-`MG-124` stands for multi-grid dilation in the last stage of ResNet.
-:::
-
 ### Cityscapes
 
 | Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
@@ -122,4 +116,6 @@ Spatial pyramid pooling module or encode-decoder structure are used in deep neur
 
 Note:
 
+- `D-8`/`D-16` here corresponding to the output stride 8/16 setting for DeepLab series.
+- `MG-124` stands for multi-grid dilation in the last stage of ResNet.
 - `FP16` means Mixed Precision (FP16) is adopted in training.
11 changes: 5 additions & 6 deletions configs/dmnet/README.md

@@ -1,4 +1,6 @@
-# Dynamic Multi-scale Filters for Semantic Segmentation
+# DMNet
+
+[Dynamic Multi-scale Filters for Semantic Segmentation](https://openaccess.thecvf.com/content_ICCV_2019/papers/He_Dynamic_Multi-Scale_Filters_for_Semantic_Segmentation_ICCV_2019_paper.pdf)
 
 ## Introduction
 
@@ -19,10 +21,9 @@ Multi-scale representation provides an effective way toaddress scale variation o
 <img src="https://user-images.githubusercontent.com/24582831/142900781-6215763f-8b71-4e0b-a6b1-c41372db2aa0.png" width="70%"/>
 </div>
 
-<details>
-<summary align="right"><a href="https://openaccess.thecvf.com/content_ICCV_2019/papers/He_Dynamic_Multi-Scale_Filters_for_Semantic_Segmentation_ICCV_2019_paper.pdf">DMNet (ICCV'2019)</a></summary>
+## Citation
 
-```latex
+```bibtex
 @InProceedings{He_2019_ICCV,
 author = {He, Junjun and Deng, Zhongying and Qiao, Yu},
 title = {Dynamic Multi-Scale Filters for Semantic Segmentation},
@@ -32,8 +33,6 @@ year = {2019}
 }
 ```
 
-</details>
-
 ## Results and models
 
 ### Cityscapes