
Commit: Update README.md
John Lambert authored Aug 19, 2020
1 parent 5e8910e commit 1f98d1e
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions in README.md

@@ -234,9 +234,9 @@ python test_universal_demo.py
 All should also pass.
 
 ## Frequently Asked Questions (FAQ) (identical to FAQ on [`mseg-api` page](https://github.com/mseg-dataset/mseg-api))
-**Q**: Do the weights include the model structure or it's just the weights? If the latter, which model do these weights refer to since under the models directory, since there are several model implementations?
+**Q**: Do the weights include the model structure or it's just the weights? If the latter, which model do these weights refer to? Under the `models` directory, there are several model implementations.
 
-**A**: The pre-trained models follow the HRNet-W48 architecture. The model structure is defined in the code [here](https://github.com/mseg-dataset/mseg-semantic/blob/master/mseg_semantic/model/seg_hrnet.py#L274). The saved weights provide a dictionary between keys (unique IDs for each weight identifying the corresponding layer) and values (the floating point weights).
+**A**: The pre-trained models follow the HRNet-W48 architecture. The model structure is defined in the code [here](https://github.com/mseg-dataset/mseg-semantic/blob/master/mseg_semantic/model/seg_hrnet.py#L274). The saved weights provide a dictionary between keys (unique IDs for each weight identifying the corresponding layer/layer type) and values (the floating point weights).
 
 **Q**: How is testing performed on the test datasets? In the paper you talk about "zero-shot transfer" -- how is this performed? Are the test dataset labels also mapped or included in the unified taxonomy? If you remapped the test dataset labels to the unified taxonomy, are the reported results the performances on the unified label space, or on each test dataset's original label space? How did you obtain results on the WildDash dataset - which is evaluated by the server - when the MSeg taxonomy may be different from the WildDash dataset?
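The answer in the diff above describes the checkpoint as a plain mapping from parameter-name keys to weight values. A minimal sketch of that idea follows; the layer names are hypothetical, not the actual MSeg checkpoint contents, and in practice the mapping would be obtained with PyTorch's `torch.load` on the downloaded `.pth` file.

```python
from collections import OrderedDict

# Illustrative stand-in for a saved "state dict": keys name the layer and
# the parameter within it, values hold the floating point weights.
# (Real HRNet-W48 checkpoints have many more entries and tensor values.)
state_dict = OrderedDict([
    ("conv1.weight", [[0.01, -0.02], [0.03, 0.04]]),  # hypothetical conv layer
    ("bn1.weight", [1.0, 1.0]),                        # hypothetical batchnorm scale
    ("bn1.bias", [0.0, 0.0]),                          # hypothetical batchnorm shift
])

# The key prefix identifies which layer each weight belongs to; matching
# these keys against a model's own parameter names is how weights are restored.
layer_names = {key.split(".")[0] for key in state_dict}
print(sorted(layer_names))  # ['bn1', 'conv1']
```

With a real model instance, `model.load_state_dict(state_dict)` performs exactly this key matching to restore the weights into the architecture defined in code.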
