---
layout: default
---
<section style="text-align: justify">
<h1> Visual Domain Adaptation Challenge </h1>
<h2> (VisDA-2018) </h2>
</br>
[<a href = "#news">News</a>]
[<a href = "#overview">Overview</a>]
[<a href = "#data">Data</a>]
[<a href= "#prizes">Prizes</a>]
[<a href = "#evaluation">Evaluation</a>]
[<a href = "#rules">Rules</a>]
[<a href = "#faq">FAQ</a>]
[<a href = "#workshop">TASK-CV Workshop</a>]
[<a href = "#organizers">Organizers</a>]
<!--[<a href = "#sponsors">Sponsors</a>]-->
</br> </br>
<a name = "news"></a>
<h2 class="section-title"> News </h2>
<p>Introducing the 2018 VisDA Challenge! Stay tuned for more dates and details coming soon. This year we are using a shiny new <a href="http://ai.bu.edu/syn2real/index.html">Syn2Real</a> dataset collected by our team.</p>
<p>For details about last year's challenge and winners, see the <a href = "http://ai.bu.edu/visda-2017/">VisDA 2017</a> challenge page.</p>
<p style="color:red"> Click <a href="https://ai.bu.edu/visda-2022/">[here]</a> to learn about our VisDA 2022 Challenge.</p>
<ul>
<li><b>Sep 24</b> TASK-CV workshop presentations available.</li>
<li><b>Sep 14</b> Winners announcement!</li>
<li><b>August 1</b> Testing data released</li>
<li><b>May 21</b> Evaluation servers are open for submission</li>
<li><b>May 16</b> Training and validation data released</li>
<li><b>April 9</b> Registration starts</li>
</ul>
</br>
<a name = "winners"></a>
<h2 class="section-title"> Winners of VisDA-2018 challenge </h2>
<div class="summary">
<b style="font-size: 20px;"> Openset Classification </b> <a href="https://competitions.codalab.org/competitions/19113#results"> [full leaderboard]</a>
</br> </br>
<table style="width:100%" class="table">
<tr class="active">
<th width='3%'>#</th>
<th width='25%'>Team Name</th>
<th>Affiliation</th>
<th>Score</th>
</tr>
<tr>
<td> 1 </td>
<td>VARMS</td>
<td>JD AI Research, CV Lab </td>
<td>92.3 [<a href="{{site.baseurl}}/assets/attachments/Visda18-classification-JD-AI.pdf">slides</a>]</td>
</tr>
<tr>
<td> 2 </td>
<td>Diggers</td>
<td>University of Electronic Science and Technology of China </td>
<td>69.0 [<a href="{{site.baseurl}}/assets/attachments/openset_runnerup_2.pdf">slides</a>]</td>
</tr>
<tr>
<td> 3 </td>
<td>THUML</td>
<td>Tsinghua University</td>
<td>68.3 [<a href="{{site.baseurl}}/assets/attachments/openset_honorable_3.pdf">slides</a>]</td>
</tr>
</table>
</br>
<b style="font-size: 20px;"> Detection </b> <a href="https://competitions.codalab.org/competitions/18892#results"> [full leaderboard]</a>
</br> </br>
<table style="width:100%" class="table">
<tr class="active">
<th width='3%'>#</th>
<th width='25%'>Team Name</th>
<th>Affiliation</th>
<th>Score</th>
</tr>
<tr>
<td> 1 </td>
<td>VARMS</td>
<td>JD AI Research, CV Lab</td>
<td>48.6 [<a href="{{site.baseurl}}/assets/attachments/Visda18-detection-JD-AI.pdf">slides</a>]</td>
</tr>
<tr>
<td> 2 </td>
<td> GF_ColourLab_UEA</td>
<td>University of East Anglia, Colour Lab</td>
<td>13.5 [<a href="{{site.baseurl}}/assets/attachments/detection_runnerup_2.pdf">slides</a>]</td>
</tr>
<tr>
<td> 3 </td>
<td> UQ_SAS </td>
<td>University of Queensland, Australia</td>
<td>12.1 [<a href="{{site.baseurl}}/assets/attachments/detection_honorable_3.pdf">slides</a>]</td>
</tr>
</table>
</br>
</div><!--//winners-->
<a name = "overview"></a>
<h2 class="section-title"> Overview </h2>
<div class="summary">
<p> We are pleased to announce the 2018 Visual Domain Adaptation (VisDA2018) Challenge! It is well known that the success of machine learning methods on visual recognition tasks is highly dependent on access to large labeled datasets. Unfortunately, performance often drops significantly when the model is presented with data from a new deployment domain which it did not see in training, a problem known as <i>dataset shift</i>. The VisDA challenge aims to test domain adaptation methods’ ability to transfer source knowledge and adapt it to novel target domains. </p>
</br>
<figure>
<img class="img-responsive" style="display:block;margin-left:auto;margin-right:auto;height:60%" src="{{site.baseurl}}/assets/images/domain-adaptation.png"/>
</figure>
<p><strong>Caption:</strong> An example of a domain adaptation problem for object classification with a synthetic source (train) domain and a real target (test) domain. Unsupervised Domain Adaptation methods aim to use labeled samples from the train domain and large volumes of unlabeled samples from the test domain to reduce prediction errors on the test domain. </p>
</br>
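<p> To make the setting concrete, below is a minimal PyTorch sketch of one unsupervised domain adaptation training step: a supervised loss on labeled synthetic (source) images combined with an unsupervised term on unlabeled real (target) images. This is only an illustration, not a prescribed or official baseline; the entropy-minimization term is just one possible choice of adaptation loss, and the <code>model</code>, <code>optimizer</code> and batch tensors are assumed to be defined elsewhere. </p>
<pre style="overflow: auto; word-wrap: normal; white-space: pre;"><code>import torch
import torch.nn.functional as F

def uda_step(model, optimizer, source_images, source_labels, target_images,
             adapt_weight=0.1):
    """One training step: supervised on source, unsupervised on target."""
    optimizer.zero_grad()

    # Supervised classification loss on the labeled synthetic (source) batch.
    source_logits = model(source_images)
    cls_loss = F.cross_entropy(source_logits, source_labels)

    # Unsupervised adaptation loss on the unlabeled real (target) batch:
    # here, entropy minimization encourages confident target predictions.
    target_probs = F.softmax(model(target_images), dim=1)
    entropy = -(target_probs * torch.log(target_probs + 1e-8)).sum(dim=1).mean()

    loss = cls_loss + adapt_weight * entropy
    loss.backward()
    optimizer.step()
    return loss.item()</code></pre>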
<p> The competition will take place during the months of May -- September 2018, and the top performing teams will be invited to present their results at the <a href="https://sites.google.com/view/task-cv2018/home">TASK-CV</a> workshop at <a href="https://eccv2018.org/">ECCV 2018</a> in Munich, Germany. This year’s challenge focuses on synthetic-to-real visual domain shifts and includes two tracks: </p>
<ul>
<li><a href = "#detection">object detection</a></li>
<li><a href = "#classification">open-set image classification</a></li>
</ul>
<p> Participants are welcome to enter one or both tracks. </p>
</br>
<a name = "classification"></a>
<h3 class="section-title"> Open-Set Classification Track </h3>
<p>Last year’s challenge featured a closed-set classification task for synthetic-to-real adaptation, where all object categories were known ahead of time. The top performing teams in this track developed CNN models that achieved impressive adaptation <a href = "http://ai.bu.edu/visda-2017/#winners">results</a>. This year, we push the boundaries beyond closed-set classification and propose a novel open-set classification task. In this track, the goal is to develop a method of unsupervised domain adaptation for object classification, where the target domains contain images of additional unknown categories not present in the source dataset.</p>
</br>
<figure>
<img class="img-responsive" style="display:block;margin-left:auto;margin-right:auto;height:60%" src="{{site.baseurl}}/assets/images/openset.png"/>
</figure>
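<p> As an illustration of the open-set setting, the sketch below (an assumed approach for exposition, not the official evaluation protocol) classifies each target image into one of the known source categories, but falls back to an "unknown" label whenever the softmax confidence of a source-trained model drops below a threshold. The threshold and the unknown-label id are hypothetical placeholders that would normally be tuned on the validation domain. </p>
<pre style="overflow: auto; word-wrap: normal; white-space: pre;"><code>import torch
import torch.nn.functional as F

UNKNOWN_LABEL = -1          # placeholder id for the extra "unknown" category
CONFIDENCE_THRESHOLD = 0.5  # hypothetical value; tune on the validation domain

def openset_predict(logits):
    """logits: (batch, num_known_classes) scores from a source-trained model."""
    probs = F.softmax(logits, dim=1)
    confidence, predicted = probs.max(dim=1)
    # Keep confident predictions; mark the rest as "unknown".
    return torch.where(confidence >= CONFIDENCE_THRESHOLD,
                       predicted,
                       torch.full_like(predicted, UNKNOWN_LABEL))</code></pre>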
</br></br>
<a name = "detection"></a>
<h3 class="section-title"> Detection Track </h3>
<p>In this track, the goal is to develop a model that can adapt between synthetic and real objects for recognition/detection. This task entails localizing an object from each of 12 learned categories in novel images by predicting its class and its bounding box. </p>
</br>
<figure>
<img class="img-responsive" style="display:block;margin-left:auto;margin-right:auto;height:60%" src="{{site.baseurl}}/assets/images/detection.png"/>
</figure>
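<p> For reference, the sketch below shows the kind of per-image output the detection track asks for (a class label, a confidence score and a bounding box per detected object), using an off-the-shelf torchvision Faster R-CNN purely as an illustration. The file name and score threshold are placeholders, and this is not the challenge baseline; see the DevKit for the required submission format. </p>
<pre style="overflow: auto; word-wrap: normal; white-space: pre;"><code>import torch
import torchvision
from torchvision import transforms
from PIL import Image

# Off-the-shelf detector used only for illustration (the `pretrained` flag may
# be named `weights` in newer torchvision releases).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = Image.open("target_image.jpg").convert("RGB")   # hypothetical file
tensor = transforms.ToTensor()(image)

with torch.no_grad():
    # One dict per image, with "boxes", "labels" and "scores" tensors.
    prediction = model([tensor])[0]

for box, label, score in zip(prediction["boxes"],
                             prediction["labels"],
                             prediction["scores"]):
    if score.item() >= 0.5:                 # placeholder confidence threshold
        print(label.item(), score.item(), box.tolist())</code></pre>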
</br></br>
<p>Both tracks feature the same datasets and focus on synthetic-to-real domain adaptation. Participants will be given three datasets, each containing the same object categories:</p>
<ul>
<li> <b> training domain (source)</b>: synthetic 2D renderings of 3D models generated from different angles and with different lighting conditions </li>
<li> <b> validation domain (target)</b>: a photo-realistic or real-image validation domain that participants can use to evaluate performance of their domain adaptation methods </li>
<li> <b> test domain (target)</b>: a new real-image test domain, <b> different from the validation domain and without labels </b>. The test set will be released shortly before the end of the competition </li>
</ul>
<p> The reason for using different target domains for validation and test is to evaluate the performance of proposed models as an out-of-the-box domain adaptation tool. This setting more closely mimics realistic deployment scenarios where the target domain is unknown at training time and discourages algorithms that are designed to handle a particular target domain. </p>
</br>
<a name = "data"></a>
<h2 class="section-title"> Data </h2>
<p>We use the recent <a href="http://ai.bu.edu/syn2real/index.html">Syn2Real</a> dataset collected by our team. Please follow the instructions outlined in the <a href = "https://github.com/VisionLearningGroup/visda-2018-public">VisDA GitHub repository</a> to download the data and development kits for the classification and detection tracks. Training and validation data are now available for download. We have also included baseline models and instructions for training several existing domain adaptation methods.</p>
</br>
<a name = "prizes"></a>
<h2 class="section-title"> Prizes </h2>
<p> We are excited to announce NVIDIA GPU prizes for the top performing teams! The top three teams in each of the open-set classification and detection tracks will receive:</p>
<ul>
<li><b>1st place:</b> Titan Xp</li>
<li><b>2nd place:</b> GTX 1080 Ti</li>
<li><b>3rd place:</b> GTX 1060 6GB</li>
</ul>
</br>
<!--
<a name = "browse"></a>
<h2 class="section-title"> Browse Data </h2>
<p > If you would like to browse sample images from each of the datasets currently available please use the links below: </p>
<br>
<p> Classification challenge: </p>
<ul>
<li><a href="pages/classification_train/aeroplane.html">training domain</a></li>
<li><a href="pages/classification_validation/aeroplane.html">validation domain</a></li>
</ul>
<p> Segmentation challenge: </p>
<ul>
<li><a href = "pages/segmentation_train/segmentation_train.html">training domain</a></li>
<li><a href = "pages/segmentation_validation/segmentation_validation.html">validation domain</a></li>
</ul>
</br>
<a name = "download"></a>
<h2 class="section-title"> Download </h2>
<p>Please follow the instructions outlined in the <a href = "https://github.com/VisionLearningGroup/taskcv-2017-public">VisDA GitHub repository</a> to download data and development kits for the classification and segmentation tracks. Training, validation and testing data are now available to download. We have also included baseline models and instructions on training several existing domain adaptation methods.</p>
<p>The DevKits are currently in beta-release. If you find any bugs, please open an issue in our GitHub repo rather than emailing the organizers directly.</p>
<p> You can get the <b><a href="https://arxiv.org/abs/1710.06924">tech report</a></b> describing dataset and baseline experiments from
<b><a href="https://arxiv.org/abs/1710.06924">arxiv</a></b>. If you use data, code or its derivatives, please consider citing:
</p>
<pre style="overflow: auto; word-wrap: normal; white-space: pre;"><code>@misc{visda2017,
Author = {Xingchao Peng and Ben Usman and Neela Kaushik and Judy Hoffman and Dequan Wang and Kate Saenko},
Title = {VisDA: The Visual Domain Adaptation Challenge},
Year = {2017},
Eprint = {arXiv:1710.06924},
}</code></pre>
</br>
-->
<a name = "evaluation"></a>
<h2 class="section-title"> Evaluation </h2>
<p> We will use CodaLab to evaluate submissions and maintain a leaderboard. To register for the evaluation server, please create an account on <a href = "https://competitions.codalab.org/">CodaLab</a> and enter as a participant in one of the following competitions: </p>
<ul>
<li><a href = "https://competitions.codalab.org/competitions/19113">Open-Set Classification Challenge</a></li>
<li><a href = "https://competitions.codalab.org/competitions/18892">Detection Challenge</a></li>
</ul>
<p>If you are working as a team, you have the option to register for one account for your team or register multiple accounts under the same team name. If you choose to use one account, please indicate the names of all of the members on your team. This can be modified in the “User Settings” tab. If your team registers for multiple accounts, please do so using the protocol explained by CodaLab <a href = "https://github.com/codalab/codalab-competitions/wiki/User_Teams">here</a>. Regardless of whether you register for one or multiple accounts, your team must adhere to the per-team submission limits (20 entries per day per team during the validation phase). </p>
<p>Please refer to the instructions in the <a href = "https://github.com/VisionLearningGroup/visda-2018-public">DevKit</a> ReadMe file for specific details on submission formatting and evaluation for the classification and detection challenges.</p>
</br>
<a name = "rules"></a>
<h2 class="section-title"> Rules </h2>
<p>The VisDA challenge tests adaptation and model transfer, so the rules are different from those of most challenges. Please read them carefully.</p>
<p><b>Supervised Training:</b> Teams may only submit test results of models trained on the source domain data and optionally pre-trained on ImageNet. Training on the validation dataset is not allowed for test submissions. Note, this may be different from other challenges. In this year’s VisDA challenge, the goal is to test how well models can adapt from synthetic to real data. Therefore, training on the validation domain is not allowed. To ensure equal comparison, we also do not allow any other external training data, modifying the provided training dataset, or any form of manual data labeling. </p>
<p><b>Pretraining on ImageNet:</b> If pre-training on the <a href = "http://image-net.org/challenges/LSVRC/2012/">ImageNet ILSVRC</a> classification training data, only the weights can be transferred, not the actual classifiers for specific objects, i.e. participants should not manually exploit correspondences between ImageNet output labels and labels in the data. Please indicate in your method description which pre-trained weights were used for initialization of the model. Teams who place in the top of the “no ImageNet pretraining” track will receive special recognition. </p>
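<p> In code, the allowed form of ImageNet pre-training amounts to something like the sketch below: keep the pre-trained backbone weights but discard the 1000-way ImageNet classifier and attach a freshly initialized head for the challenge categories. The class count is an assumption for illustration (e.g. the 12 known categories plus an "unknown" bin for the open-set track). </p>
<pre style="overflow: auto; word-wrap: normal; white-space: pre;"><code>import torch.nn as nn
import torchvision

NUM_CLASSES = 13  # assumption: 12 known categories + 1 "unknown" (open-set track)

# Load ImageNet weights for the backbone only.
model = torchvision.models.resnet50(pretrained=True)

# Replace the ImageNet-specific classifier with a new, randomly initialized
# head, so no correspondence between ImageNet labels and challenge labels
# is exploited.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)</code></pre>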
<p><b>Unsupervised training:</b> Models can be adapted (trained) on the test data in an unsupervised way, i.e. without labels. Adaptation, even unsupervised, on the validation data is not allowed for test submissions. Note, we have released the validation labels to facilitate algorithm development. </p>
<p><b>Source Models:</b> The performance of a domain adaptation algorithm greatly depends on the baseline performance of the model trained only on source data. We ask that teams submit two sets of results: 1) predictions obtained only with the source-trained model, and 2) predictions obtained with the adapted model. See the development kit for submission formatting details. </p>
<p><b>Leaderboard:</b> The main leaderboard for each competition track will show results of adapted models and will be used to determine the final team ranks. The expanded leaderboard will additionally show the team's source-only models, i.e. those trained only on the source domain without any adaptation. These results are useful for estimating how much the method improves upon its source-only model, but will not be used to determine team ranks. </p>
</br>
<a name = "faq"></a>
<h2 class="section-title"> FAQ </h2>
<ol>
<li><b>Can we train models on data other than the source domain?</b></li>
<p> Participants may elect to pre-train their models only on ImageNet. Please refer to the challenge evaluation instructions found in the <a href = "#data">DevKit</a> for more details. </p>
<li><b>Do we have to use the provided baseline models?</b></li>
<p>No, these are provided for your convenience and are optional. </p>
<li><b>How many submissions can each team submit per competition track?</b></li>
<p>For the validation domain, the number of submissions per team is limited to 20 uploads per day, and there are no restrictions on the total number of submissions. For the test domain, the number of submissions per team is limited to 1 upload per day and 20 uploads in total. Only one account per team may be used to submit results. Do not create multiple accounts for a single project to circumvent this limit, as this will result in disqualification. </p>
<li><b>Can multiple teams enter from the same research group?</b></li>
<p>Yes, so long as each team consists of different members.</p>
<li><b>Can external data be used?</b></li>
<p>The allowed training data consists of the VisDA 2018 Training set. The VisDA 2018 Validation set can be used to test adaptation to a target domain offline, but cannot be used to train the final submitted model (with or without labels). Optional initialization of models with weights pre-trained on ImageNet is allowed and must be declared in the submission. Please see the <a href = "#rules">challenge rules</a> for more details. </p>
<li><b>Are challenge participants required to reveal all details of their methods?</b></li>
<p>Participants are encouraged but not required to include a brief write-up regarding their methods when submitting their results. </p>
<li><b>Do participants need to adhere to TASK-CV abstract submission deadlines to participate in the challenge?</b></li>
<p>Submission of a <a href = "https://sites.google.com/view/task-cv2018/home">TASK-CV</a> workshop abstract is not mandatory to participate in the challenge; however, we request that any teams that wish to be considered for prizes or receive an invitation to speak at the workshop submit a 2-page abstract. The top-performing teams that submit abstracts will be invited to present their approaches at the workshop. </p>
</br>
<a name = "workshop"></a>
<h2 class="section-title"> Workshop </h2>
<p>The challenge is associated with the 5th annual <a href="https://sites.google.com/view/task-cv2018/home">TASK-CV</a> workshop,
being held at <a href = "https://eccv2018.org/"> ECCV 2018</a> in Munich, Germany. Challenge participants are
encouraged to submit abstracts to the main TASK-CV workshop. In order to be considered for any challenge prizes,
as well as receive an invitation to give a talk about their results at the special VisDA session of the workshop,
participants are requested to submit a 2-page abstract directly via email to visda-organizers@googlegroups.com
within 1 week of the challenge end.
</p>
<hr>
</br>
<a name = "organizers"></a>
<h2 class="section-title"> Organizers </h2>
Kate Saenko (Boston University),
Ben Usman (Boston University),
Xingchao Peng (Boston University),
Neela Kaushik (Boston University),
Kuniaki Saito (The University of Tokyo),
Judy Hoffman (UC Berkeley)
</br>
<!--
</br>
<a name = "sponsors"></a>
<h2 class="section-title"> Sponsors </h2>
<img src="{{site.baseurl}}/assets/images/nsf_logo.png" style="float: left; width: 20%; margin-right: 5%; margin-bottom: 0.5em;">
<img src="{{site.baseurl}}/assets/images/Nexar-Logo.jpg" style="float: left; width: 40%; margin-right: 5%; margin-bottom: 0.5em;">
<p style="clear: both;">
</br></br></br>
<h2 class="section-title"> References </h2>
[1] FCNs in the Wild: Pixel-level Adversarial and Constraint-based Adaptation, Judy Hoffman, Dequan Wang, Fisher Yu, Trevor Darrell
</br></br>
-->
<div style="height: 500px;"></div>
</div><!--//summary-->
</section><!--//section-->