---
layout: default
---
<section style="text-align: justify">
<h1> Visual Domain Adaptation Challenge </h1>
<h2> (VisDA-2022) </h2>
<br>
<!-- [<a href = "#news">News</a>]
[<a href = "#overview">Overview</a>]
[<a href = "#data">Data</a>]
[<a href = "https://github.com/VisionLearningGroup/visda-2019-public">Data and Code</a>]
[<a href= "#prizes">Prizes</a>]
[<a href = "#evaluation">Evaluation</a>]
[<a href = "#rules">Rules</a>]
[<a href = "#faq">FAQ</a>]
[<a href = "https://sites.google.com/view/task-cv2019">TASK-CV Workshop</a>]
[<a href = "#organizers">Organizers</a>]
[<a href = "#sponsors">Sponsors</a>]
</br> </br> </nr>
<a name = "news"></a>
-->
<h2 class="section-title"> Workshop </h2>
The online NeurIPS'22 workshop for VisDA-2022 was held on Thursday, December 8th, 2022, at 21:00 UTC (16:00 EST / 15:00 CST). Find the workshop recording <a href="https://youtu.be/ewPHzZo78_0"><b>here</b></a>.
Click <a href="https://recorder-v3.slideslive.com/?share=78449&s=2dde8157-803e-4828-b64d-93297fb20aad">here</a> to see a quick overview of the competition.
<figure>
<img class="img-responsive" style="display:block;margin-left:auto;margin-right:auto" src="assets/images/visda_2022_workshop_ad.png"/>
</figure>
<br> <br> <b style="font-size: 20px;"> Leaderboard </b> <br> <br>
<table style="width:100%" class="table">
<tr class="active">
<th width='3%'>#</th>
<th width='25%'>Team Name</th>
<th>Affiliation</th>
<th width='18%'>mIoU / Pixel Acc (%)</th>
</tr>
<tr>
<td> 1 </td>
<td>SI Analytics <a href="https://github.com/DaehanKim-Korea/VisDA2022_1st_Place_Solution">[code]</a> </td>
<td>SI Analytics, Hanbat National University <br> <i>Daehan Kim, Minseok Seo, YoungJin Jeon, Dong-Geol Choi </i> </td>
<td>59.42 / 93.18</td>
</tr>
<tr>
<td> 2 </td>
<td> Pros <a href="https://github.com/shahaf1313/VisDAChallenge">[code]</a> </td>
<td>Tel-Aviv University <br> <i>Shahaf Ettedgui, Shady Abu-Hussein, Raja Giryes</i> </td>
<td>55.46 / 92.59</td>
</tr>
<tr>
<td> 3 </td>
<td> BIT-DA <a href="https://github.com/BinhuiXie/visda2022-3rd-place">[code]</a> </td>
<td>Beijing Institute of Technology <br> <i>Binhui Xie, Mingjia Li, Qingju Guo, Shuang Li</i> </td>
<td>54.38 / 91.80</td>
</tr>
</table>
<b style="font-size: 20px;"> Schedule </b> <br> <br>
<table style="width:100%" class="table">
<tr class="active">
<th width='15%'>Time (CST)</th>
<th width='65%'>Title</th>
<th>Presenter(s)</th>
</tr>
<tr>
<td>3:00 p.m. - 3:15 p.m.</td>
<td>Challenge Introduction (Talk)</td>
<td>Dina Bashkirova</td>
</tr>
<tr>
<td>3:15 p.m. - 4:00 p.m.</td>
<td>Presentations from Winners (Presentations + Q&A)</td>
<td>Binhui Xie · Shahaf Ettedgui · Daehan Kim</td>
</tr>
<tr>
<td>4:00 p.m. - 4:20 p.m.</td>
<td>Subhransu Maji: Learning to Track Birds with Weather RADARs (Talk)</td>
<td><a href="https://people.cs.umass.edu/~smaji/"><b>Subhransu Maji</b></a></td>
</tr>
<tr>
<td>4:20 p.m. - 4:40 p.m.</td>
<td>Colorado Reed: From AI for Science to Science for AI. A tale of ice, fire, and domain adaptation (Talk)</td>
<td><a href="https://people.eecs.berkeley.edu/~cjrd/"><b>Colorado Reed</b></a></td>
</tr>
<tr>
<td>4:40 p.m. - 5:00 p.m.</td>
<td>Amanda Marrs: Applying AI and Computer Vision to Modernize Recycling and Increase Landfill Diversion (Talk)</td>
<td><a href="https://www.amprobotics.com/managementamanda-marrs"><b>Amanda Marrs</b></a></td>
</tr>
<tr>
<td>5:00 p.m. - 5:20 p.m.</td>
<td>Ian Goodine, Ethan Walko: Decentralized Waste Sorting (Talk)</td>
<td>Ian Goodine</td>
</tr>
<tr>
<td>5:20 p.m. - 5:40 p.m.</td>
<td>Sujit Sanjeev: Introduction to CircularNet (Talk)</td>
<td>Sujit Sanjeev</td>
</tr>
<tr>
<td>5:40 p.m. - 6:00 p.m.</td>
<td>Final Q&A and Discussion Session (Discussion Panel)</td>
<td>All</td>
</tr>
</table>
<!-- <ul>
<li> Introduction from VisDA organizers (15 minutes) </li>
<li> Presentations from the winning teams (40 minutes):
<ul>
<li>SI Analytics</li>
<li>Pros</li>
<li>BIT-DA</li>
</ul>
</li>
<li> Invited Speakers :
<ul>
<li>Subhransu Maji, UMass Amherst, (20 minutes)</li>
<li>Colorado Reed, UC Berkeley, (20 minutes) </li>
<li>Amanda Marrs, AMP Robotics, (20 minutes)</li>
<li>Ian Goodine, Ethan Walko, rStream Recycling, (20 minutes)</li>
<li>Sujit Sanjeev, gTech Sustainability, Google, (20 minutes)</li>
</ul>
</li>
<li>Additional Q&A period, (15 minutes)</li>
</ul> -->
<!--
<b style="font-size: 20px;"> Schedule </b> </br> </br>
<ul>
<li> <i>Haojin Liao</i>: <b>3rd Place</b> <a href="https://www.youtube.com/watch?v=6S5woHLSqpQ&t=304s">[video]</a> </li>
<li> <i>Chandramouli Rajagopalan</i>: <b>2nd Place</b> <a href="https://www.youtube.com/watch?v=6S5woHLSqpQ&t=1372s">[video]</a> </li>
<li> <i>Burhan Ul Tayyab</i>: <b>1st Place</b> <a href="https://www.youtube.com/watch?v=6S5woHLSqpQ&t=2124s">[video]</a> <a href="https://https://ai.bu.edu/visda-2021/assets/pdf/Burhan_Slides.pdf">[slides]</a> <a href="https://ai.bu.edu/visda-2021/assets/pdf/Burhan_Report.pdf">[pdf]</a> </li>
</ul>
-->
<h2 class="section-title"> Overview </h2>
<a name = "overview"></a>
<p>
Welcome to the Visual Domain Adaptation 2022 Challenge! This year, our challenge brings domain adaptation closer to real-world applications, as we propose the task of semantic segmentation for industrial waste sorting. This is the 6th edition of the challenge, and we look forward to participation from a large and growing NeurIPS community with a breadth of backgrounds.
</p>
<p>
In industrial waste sorting, it is impossible to collect a gold-standard dataset that fully represents the task: the visual appearance of the waste stream, as well as its content, changes over time and depends on the specific location, season, machinery in use, and many other factors, all of which introduce a natural domain shift between any source/training and target distributions. Therefore, this year we challenge the computer vision community to come up with effective solutions for the domain adaptation problem in this real-world application. Our challenge uses the large-scale synthetic SynthWaste dataset and the real ZeroWaste dataset as source domains, and ZeroWasteV2, a real dataset collected at a different location and in a different season, as the target domain. Additionally, we provide SynthWaste-aug, a version of SynthWaste augmented with instance-level style transfer to further increase the diversity of the synthetic dataset. The goal of this challenge is to answer the question: can synthetic data improve performance on this task and help adapt to changing data distributions? We invite teams to help answer this question and to facilitate research aimed at solving the automated waste sorting problem.
</p>
<figure>
<img class="img-responsive" style="display:block;margin-left:auto;margin-right:auto" src="assets/images/zerowaste_domains.png"/>
</figure>
<br>
<br>
<a name = "announcements"></a>
<h2 class="section-title"> Announcements </h2>
<ul>
<li> <b> June 6th: </b> Register your team <a href="https://forms.gle/R7c8zJQbX92qW8X2A">[here]</a>. </li>
<li> <b> June 24th: </b> Training data and starter code are available <a href="https://github.com/dbash/visda2022-org">[here]</a>. Evaluation server coming soon. </li>
<li> <b> June 30th: </b> The evaluation server and interface are up on <a href="https://eval.ai/web/challenges/challenge-page/1806/overview">eval.ai</a>. Find submission instructions <a href="https://github.com/dbash/visda2022-org#submit-on-evalai">[here]</a>. </li>
<li> <b> Sept 30th: </b> Phase-2 begins. Test data released. See instructions <a href="https://github.com/dbash/visda2022-org#setup-datasets">here</a>. </li>
<li> <b> Oct 10th: </b> Phase-2 wraps up. Final results announced. </li>
<!-- <ul>
<li> <b> June 23rd: </b> the official devkit and with data urls are released on <a href="https://github.com/VisionLearningGroup/visda21-dev">[github]</a>. </li>
<li> <b> July 7th: </b> the evaluation server and the leaderboard are up on <a href="https://competitions.codalab.org/competitions/33396">[codalab]</a>
<li> <b> July 26th: </b> the technical report describing the challenge is avaliable on <a href="https://arxiv.org/abs/2107.11011">[arxiv]</a>.
<li> <b> July 28th: </b> the <a href="#prizes">[prize fund]</a> is finalized
<li> <b> Aug 10th: </b> the <a href="#rules">[rules]</a> regarding ensembling and energy efficiency are updated.
<li> <b> Sept 30th: </b> <a href="#data">[test data]</a> and an example submission are released, test leaderboard is live.
</ul> -->
</ul>
<br>
<br>
<a name = "prizes"></a>
<h2 class="section-title"> Prizes </h2>
<p> The top three teams will receive pre-paid VISA gift cards: $2000 for 1st place and $500 each for 2nd and 3rd place. </p> <br>
<br>
<br>
<a name = "rules"></a>
<h2 class="section-title"> Rules </h2>
<ul>
<li>Teams from both academic and industrial institutions are welcome to participate in our challenge. In case of a victory (top-3 solution), the participating team agrees to publish the full code needed to reproduce its results. The list of team members must be provided during registration and cannot be changed during the competition. Each individual may participate in only one team and must provide an institutional / corporate email address and a phone number at registration. </li>
<li>Teams may use any publicly available and appropriately licensed data to train their models in addition to the ones provided by the organizers. Supervised training on the validation set of the target domain is not allowed in this competition.</li>
<li>Models may be adapted using the unlabeled dataset from the target domain, without access to its labels. Solutions that involve supervised training on the target validation set or manual labeling of the target domain will be disqualified.</li>
<li>The choice of segmentation backbone greatly affects domain adaptation results. To allow a fair comparison, participants will be asked to submit two sets of results: source-only predictions from a backbone model trained only on data from the source domains, and the domain adaptation results of the model trained on source and unlabeled target data. </li>
<li>The winning solutions will be determined by the test mIoU of the adapted model on the target data. In the unlikely event of a tie, mean pixel accuracy will be used as the tie-breaker (a sketch of how these metrics are typically computed is shown below the rules). </li>
<li>To encourage fair competition regardless of affiliation and compute capabilities, we limit the overall model size to 300 million parameters. If multiple distinct models are used, the total number of parameters is counted according to the number of forward passes over the same input batch: for example, if the same input example is passed twice through a model with 150 million parameters, it counts as 300 million parameters in total. Note that for a segmentation model we count this per pixel, i.e. the effective parameter count is the maximum, over all pixels, of the number of forward passes that touch the pixel multiplied by the number of model parameters. Hence, if you run inference over multiple <b>non-overlapping</b> windows of a single image with a 150-million-parameter model, you may run several forward passes, but any individual pixel is processed by at most 150 million parameters, which keeps you under the 300-million limit (see the sketch after this list). </li>
<li>Reproducibility is the responsibility of the winning teams. The top-3 winning solutions must submit their full code along with complete instructions / scripts for reproducing their results, ideally with the exact random seeds used to obtain the best result. If the organizing committee finds that the submitted code runs with errors or does not yield results comparable to those on the final leaderboard, and the team is not willing to cooperate, the team will be disqualified and its place will go to the next team on the leaderboard. </li>
<li>Energy efficiency: teams must report the total training time of the submitted model, which should be reasonable (we define this as not exceeding 100 GPU-days on a V100 (16 GB); contact us if unsure). Energy-efficient solutions will be highlighted even if they do not finish in the top three. </li>
<li>In Phase 2 of the challenge, each team is allowed a maximum of 5 submissions.</li>
</ul>
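<p> For reference, below is a minimal, illustrative sketch of how mIoU and pixel accuracy are typically computed for semantic segmentation. The function name, the assumption of integer label maps, and the ignore label of 255 are ours; the official evaluation code in the challenge devkit is authoritative. </p>
<pre><code class="language-python">
import numpy as np

def miou_and_pixel_acc(pred, gt, num_classes, ignore_index=255):
    """Illustrative metric sketch, not the official evaluation code.

    pred, gt: integer label maps of identical shape.
    Returns fractions in [0, 1]; multiply by 100 for the leaderboard's percentages.
    """
    valid = gt != ignore_index
    pred, gt = pred[valid], gt[valid]
    # Confusion matrix: rows are ground-truth classes, columns are predicted classes.
    conf = np.bincount(num_classes * gt + pred,
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(conf).astype(float)
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp
    present = union > 0                      # average IoU only over classes that occur
    miou = (tp[present] / union[present]).mean()
    pixel_acc = tp.sum() / conf.sum()
    return miou, pixel_acc
</code></pre>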
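<p> To illustrate the per-pixel parameter-counting rule, here is a hypothetical sketch (the helper function, tile coordinates, and image size are ours and not part of the official rules): it counts, for each pixel, how many forward passes cover it, and multiplies the maximum by the per-pass parameter count. </p>
<pre><code class="language-python">
import numpy as np

def effective_params(image_hw, windows, params_per_pass):
    """Illustrative sketch of the per-pixel counting rule, not official scoring code.

    image_hw: (height, width) of the input image
    windows: list of (top, left, height, width) tiles, one forward pass each
    params_per_pass: parameter count of the model used for each pass
    """
    passes = np.zeros(image_hw, dtype=int)
    for top, left, h, w in windows:
        passes[top:top + h, left:left + w] += 1   # forward passes covering each pixel
    return passes.max() * params_per_pass

# Non-overlapping 2x2 tiling of a 512x512 image with a 150M-parameter model:
tiles = [(0, 0, 256, 256), (0, 256, 256, 256),
         (256, 0, 256, 256), (256, 256, 256, 256)]
print(effective_params((512, 512), tiles, 150_000_000))   # 150M: within the 300M limit

# Passing the full image twice through the same 150M-parameter model (every pixel is
# seen by two forward passes) counts as 300M, i.e. exactly at the cap:
print(effective_params((512, 512), [(0, 0, 512, 512)] * 2, 150_000_000))  # 300M
</code></pre>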
<br>
<br>
<a name = "sponsors"></a>
<h2 class="section-title"> Sponsors </h2>
<img src="assets/images/boston_univ_rgb.gif" height="100">
<br>
<br>
<a name = "organizers"></a>
<h2 class="section-title"> Organizers </h2>
Dina Bashkirova (BU), Piotr Teterwak (BU), Samarth Mishra (BU), Donghyun Kim (BU), Diala Lteif (BU), Rachel Lai (BU), Fadi Alladkani (WPI), James Akl (WPI), Berk Calli (WPI), Vitaly Ablavsky (UW), Sarah Adel Bargal (Georgetown Univ.), Kate Saenko (BU & MIT-IBM Watson AI)
<br>
<br>
</section><!--//section-->