<!DOCTYPE html>
<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type" />
<title>Kunpeng Li</title>
<meta content="Kunpeng Li, KunpengLi1994.github.io" name="keywords" />
<style media="screen" type="text/css">html, body, div, span, applet, object, iframe, h1, h2, h3, h4, h5, h6, p, blockquote, pre, a, abbr, acronym, address, big, cite, code, del, dfn, em, font, img, ins, kbd, q, s, samp, small, strike, strong, sub, tt, var, dl, dt, dd, ol, ul, li, fieldset, form, label, legend, table, caption, tbody, tfoot, thead, tr, th, td {
border: 0pt none;
font-family: inherit;
font-size: 100%;
font-style: inherit;
font-weight: inherit;
margin: 0pt;
outline-color: invert;
outline-style: none;
outline-width: 0pt;
padding: 0pt;
vertical-align: baseline;
}
a {
color: #1772d0;
text-decoration:none;
}
a:focus, a:hover {
color: #f09228;
text-decoration:none;
}
a.paper {
font-weight: bold;
font-size: 14pt;
}
b.paper {
font-weight: bold;
font-size: 14pt;
}
* {
margin: 0pt;
padding: 0pt;
}
body {
position: relative;
margin: 3em auto 2em auto;
width: 1000px;
font-family: Lato, Verdana, Helvetica, sans-serif;
font-size: 16px;
background: #eee;
}
h2 {
font-family: Lato, Verdana, Helvetica, sans-serif;
font-size: 17pt;
font-weight: 700;
}
h3 {
font-family: Lato, Verdana, Helvetica, sans-serif;
font-size: 18px;
font-weight: 700;
}
strong {
font-family: Lato, Verdana, Helvetica, sans-serif;
font-size: 15px;
font-weight:bold;
}
ul {
list-style: circle;
}
img {
border: none;
}
li {
padding-bottom: 0.5em;
margin-left: 1.4em;
}
alert {
font-family: Lato, Verdana, Helvetica, sans-serif;
font-size: 15px;
font-weight: bold;
color: #FF0000;
}
em, i {
font-style:italic;
}
div.section {
clear: both;
margin-bottom: 1.5em;
background: #eee;
}
div.spanner {
clear: both;
}
div.paper {
clear: both;
margin-top: 0.5em;
margin-bottom: 1em;
border: 1px solid #ddd;
background: #fff;
padding: 1em 1em 1em 1em;
}
div.paper div {
padding-left: 230px;
}
img.paper {
margin-bottom: 0.5em;
float: left;
width: 200px;
}
span.blurb {
font-style:italic;
display:block;
margin-top:0.75em;
margin-bottom:0.5em;
}
pre, code {
font-family: 'Lucida Console', 'Andale Mono', 'Courier', monospace;
margin: 1em 0;
padding: 0;
}
div.paper pre {
font-size: 0.9em;
}
</style>
<link href="http://fonts.googleapis.com/css?family=Lato:400,700,400italic,700italic" rel="stylesheet" type="text/css" /><!--<link href='http://fonts.googleapis.com/css?family=Open+Sans+Condensed:300' rel='stylesheet' type='text/css'>--><!--<link href='http://fonts.googleapis.com/css?family=Open+Sans' rel='stylesheet' type='text/css'>--><!--<link href='http://fonts.googleapis.com/css?family=Yanone+Kaffeesatz' rel='stylesheet' type='text/css'>-->
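<!-- Google Analytics (analytics.js) page-view tracking -->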
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-45959174-3', 'kailigo.github.io');
ga('send', 'pageview');
</script>
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-66888300-1', 'auto');
ga('send', 'pageview');
</script>
</head>
<body>
<div style="margin-bottom: 1em; border: 1px solid #ddd; background-color: #fff; padding: 1em; height: 140px;">
<div style="margin: 0px auto; width: 100%;">
<img title="Kunpeng" style="float: left; padding-left: .5em; height: 140px;" src="KunpengLi.JPG" />
<div style="padding-left: 12em; vertical-align: top; height: 120px;"><span style="line-height: 150%; font-size: 20pt;">Kunpeng Li</span><br />
<span><strong>Research Scientist</strong></span><br />
<!--<span><a href='http://cvrs.whu.edu.cn/'>Computer Vision & Remote Sensing (CVRS) Lab </a> <br /> </q>-->
<span>Meta Generative AI</span><br />
<!-- <span><strong>Office</strong>: Richard Center, 360 Huntington Ave, Boston, MA 02115</span><br /> -->
<span><strong>Email </strong>: kunpengli [at] meta.com,   kinpeng.li.1994 [at] gmail.com <br />
<strong><a href='https://github.com/KunpengLi1994'>Github</a></strong>    
<strong><a href='https://scholar.google.com/citations?hl=EN&user=0TOLw9YAAAAJ&view_op=list_works'>Google Scholar</a></strong></span><br/>
<!-- <span><strong> <a href='https://github.com/kailigo'>Github</a> </strong></span> <br /> -->
<!-- <span><strong> <a href='https://scholar.google.com/citations?hl=en&user=YsROc4UAAAAJ'>Google Scholar</a> </strong></span> <br /> -->
</div>
</div>
</div>
<!--<div style="clear: both; background-color: #fff; margin-top: 1.5em; padding: .2em; padding-left: .3em;">-->
<div style="clear: both;">
<div class="section">
<h2>About Me</h2>
<div class="paper">
I am a Research Scientist at Meta Generative AI (<a href='https://research.fb.com/people/li-kunpeng/'>FB page</a>).
My current research interests lie in Computer Vision and Deep Learning, especially learning with limited supervision, vision & language, Generative AI, and scene understanding.
<br /> <br /> I received my Ph.D. degree from the Department of Electrical and Computer Engineering at <a href='http://www.northeastern.edu/'>Northeastern University</a>, advised by <a href='http://www1.ece.neu.edu/~yunfu/'>Prof. Yun Fu</a>. You can find our lab's webpage here: <a href='https://web.northeastern.edu/smilelab/'>SmileLab</a>.
I have also spent time at <a href='https://research.google/teams/cloud-ai/'>Google Research</a>, <a href='https://research.adobe.com/'>Adobe Research</a>, <a href='https://ailab.bytedance.com/'>Bytedance AI Lab U.S.</a> and <a href='https://new.siemens.com/global/en/company/innovation/corporate-technology.html'>Siemens Research</a> as a research intern.
<!-- Prior to this, I obtained my bachelor degree from School of Electronic and Information Engineering, <a href='https://www.scut.edu.cn/new/'>South China University of Technology</a>, supervised by Prof. Xin Zhang and <a href='http://www.hcii-lab.net/lianwen/'>Prof. Lianwen Jin</a>. -->
</div>
</div>
</div>
<div style="clear: both;">
<div class="section">
<h2>News</h2>
<div class="paper">
<ul>
<!-- <li> 2018.07: One paper is accepted by <strong> ECCV 2018 </strong>.</li>
<li> 2018.07: We have one paper accepted by <strong> ACM MM 2018 </strong> .</li> -->
<li> 2024.04: Core contributor to the Text-to-Image foundation model that has been shipped to <a href='https://www.meta.ai/?icebreaker=imagine'>[Meta AI]</a> as well as Instagram, WhatsApp, and Messenger. </li>
<li> 2024.02: We have two papers accepted by <strong>CVPR 2024</strong>. </li>
<li> 2023.02: We have one paper accepted by <strong>CVPR 2023</strong>; check out our Open-vocabulary Segmentation <a href='https://jeff-liangf.github.io/projects/ovseg/'>[Project]</a> and <a href='https://huggingface.co/spaces/facebook/ov-seg'>[Demo]</a>. </li>
<li> 2023.02: We have one paper accepted by <strong>ICLR 2023</strong>. </li>
<li> 2022.10: Core contributor to the <a href='https://www.youtube.com/watch?v=j6LY5YWi4o8'>[Face Tracking model]</a> that has been shipped to <a href='https://www.meta.com/blog/quest/meta-quest-pro-privacy/?utm_source=www.meta.com&utm_medium=oculusredirect'>[Meta Quest Pro]</a> to enable avatars to more closely mirror people's expressions in VR. </li>
<li> 2022.09: We have one paper accepted by <strong>TOMM</strong>.</li>
<li> 2022.02: We have one paper accepted by <strong>ICLR 2022</strong>.</li>
<li> 2022.01: One paper is accepted by <strong>IEEE TPAMI </strong> and one is accepted by <strong>AAAI 2022</strong> <a href='https://ai.googleblog.com/2022/03/learning-from-weakly-labeled-videos-via.html'>[Post]</a>.</li>
<li> 2021.09: Serve as a <strong>Senior Program Committee (SPC)</strong> member for IJCAI from 2022 to 2024.</li>
<li> 2021.09: Join <strong>Facebook Reality Labs (FRL)</strong>, CA as a Research Scientist.</li>
<li> 2021.08: Defend my Ph.D. and graduate from <strong>Northeastern University</strong>. Goodbye, Boston.</li>
<li> 2021.06: One paper is accepted by <strong>IEEE TIP</strong>.</li>
<li> 2021.04: Receive the <strong>Outstanding Graduate Research Award</strong> from the College of Engineering, NEU.</li>
<li> 2021.03: Our team won the <strong>1st Prize</strong> in both RGB and RGB-D tracks of CVPR21 Challenge on Large Scale Signer Independent Isolated Sign Language Recognition <a href='https://github.com/jackyjsy/CVPR21Chal-SLR'>[code]</a> <a href='http://chalearnlap.cvc.uab.es/challenge/43/description/'>[link]</a> <a href='https://coe.northeastern.edu/news/smile-lab-wins-1st-at-conference-on-computer-vision-and-pattern-recognition/'>[news]</a>.</li>
<li> 2021.03: One paper is accepted by <strong>CVPR 2021</strong>.</li>
<li> 2020.11: One paper is accepted by <strong>Nature: Commun. Biol</strong>.</li>
<li> 2020.09: One paper is accepted by <strong>IEEE TNNLS</strong>.</li>
<li> 2020.03: Two papers are accepted by <strong>CVPR 2020</strong>.</li>
<li> 2020.02: One paper is accepted by <strong>Lab on a Chip</strong>, one of the top journals in Chemical/Biomedical Engineering.</li>
<li> 2020.01: Start my internship at <strong>Google AI Research</strong>, Sunnyvale, CA.</li>
<li> 2019.07: Two papers are accepted by <strong>ICCV 2019</strong>, one as <strong> Oral </strong> and one as Poster.</li>
<li> 2019.06: One paper is accepted by <strong>IEEE TPAMI </strong>.</li>
<li> 2019.05: Start my internship at <strong>Bytedance AI Lab</strong>, Palo Alto, CA.</li>
<li> 2018.05: Start my internship at <strong>Adobe Research</strong>, San Jose, CA.</li>
<li> 2018.03: One paper is accepted by <strong> CVPR 2018 </strong> as <strong> spotlight </strong>.</li>
<li> 2017.05: Start my internship at <strong>Siemens Research</strong>, Princeton, NJ.</li>
<li> 2017.04: One paper is accepted by <strong> IJCAI 2017 </strong> .</li>
<li> 2016.09: Begin my new journey at <strong>Northeastern University</strong>, Boston, MA.</li>
</ul>
</div>
</div>
</div>
<!-- <div style="clear: both;">
<div class="section">
<h2 id="confpapers">Intern Experience</h2>
-->
<div style="clear: both;">
<div class="section">
<!-- <h2 id="confpapers">Previous Selected Publications</h2> -->
<h2>Previous Selected Publications [<a href='https://scholar.google.com/citations?hl=zh-CN&user=0TOLw9YAAAAJ'>Full List and Recent Works</a>]</h2>
<div class="paper" id="pstuts"><img class="paper" src="./pic/pstuts.png" title="Screencast Tutorial Video Understanding">
<div> <strong>Screencast Tutorial Video Understanding</strong><br>
<strong><u>Kunpeng Li</u></strong>, Chen Fang, Zhaowen Wang, Seokhwan Kim, Hailin Jin, Yun Fu <br>
IEEE Conference on Computer Vision and Pattern Recognition (<strong>CVPR</strong>), 2020
<a href='https://openaccess.thecvf.com/content_CVPR_2020/papers/Li_Screencast_Tutorial_Video_Understanding_CVPR_2020_paper.pdf'>[PDF]</a>,
<a href='https://github.com/KunpengLi1994/PsTuts'>[Code]</a>,
<a href='https://sites.google.com/view/pstuts/'>[Project]</a>
</div>
<div class="spanner"></div>
</div>
<div class="paper" id="VSRN"><img class="paper" src="./pic/VSRN.png" title="Visual Semantic Reasoning for Image-Text Matching">
<div> <strong>Visual Semantic Reasoning for Image-Text Matching</strong><br>
<strong><u>Kunpeng Li</u></strong>, Yulun Zhang, Kai Li, Yuanyuan Li, Yun Fu <br>
International Conference on Computer Vision (<strong>ICCV</strong>), 2019 <strong>(Oral)</strong>
<a href='https://arxiv.org/pdf/1909.02701.pdf'>[PDF]</a>,
<a href='https://github.com/KunpengLi1994/VSRN'>[Code]</a>,
<a href='https://youtu.be/oFDF1yT0T-4?t=2704 '>[Talk]</a>
</div>
<div class="spanner"></div>
</div>
<div class="paper" id="AttnBN"><img class="paper" src="./pic/AttnBN.png" title="Attention Bridging Network for Knowledge Transfer">
<div> <strong>Attention Bridging Network for Knowledge Transfer</strong><br>
<strong><u>Kunpeng Li</u></strong>, Yulun Zhang, Kai Li, Yuanyuan Li, Yun Fu <br>
International Conference on Computer Vision (<strong>ICCV</strong>), 2019
<a href='http://openaccess.thecvf.com/content_ICCV_2019/papers/Li_Attention_Bridging_Network_for_Knowledge_Transfer_ICCV_2019_paper.pdf'>[PDF]</a>,
<a href='https://drive.google.com/file/d/1Hm9aWGGvsCD_ixrovuRXGFygP-SlahI9/view'>[Single-label Data]</a>
<a href='https://drive.google.com/file/d/1svFqgz0ddn-9x30fzoeKmEgI7A8aFne6/view'>[Localization Cues]</a>
<!-- [code]</a> -->
</div>
<div class="spanner"></div>
</div>
<div class="paper" id="RNAN"><img class="paper" src="./pic/RNAN.png" title="Residual Non-local Attention Networks for Image Restoration">
<div> <strong>Residual Non-local Attention Networks for Image Restoration</strong><br>
Yulun Zhang, <strong><u>Kunpeng Li</u></strong>, Kai Li, Bineng Zhong, Yun Fu <br>
International Conference on Learning Representations (<strong>ICLR</strong>), 2019
<a href='https://openreview.net/pdf?id=HkeGhoA5FX'>[PDF]</a>,
<a href='https://github.com/yulunzhang/RNAN'>[Code]</a>
</div>
<div class="spanner"></div>
</div>
<div class="paper" id="GAIN"><img class="paper" src="./pic/GAIN.png" title="Tell Me Where to Look: Guided Attention Inference Network">
<div> <strong>Tell Me Where to Look: Guided Attention Inference Network</strong><br>
<strong><u>Kunpeng Li</u></strong>, Ziyan Wu, Kuan-Chuan Peng, Jan Ernst, Yun Fu <br>
IEEE Conference on Computer Vision and Pattern Recognition (<strong>CVPR</strong>), 2018 <strong>(Spotlight)</strong>
<a href='http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_Tell_Me_Where_CVPR_2018_paper.pdf'>[PDF]</a>,
<a href='https://github.com/alokwhitewolf/Guided-Attention-Inference-Network'>[Code]</a> by <a href='https://github.com/alokwhitewolf'>alokwhitewolf</a>,
<a href='https://www.youtube.com/watch?v=op9IBox_TTc'>[Talk]</a>,
<a href='https://press.siemens.com/global/en/pressrelease/siemens-extends-sinumerik-edge-include-more-applications-bringing-artificial'>[Siemens Product]</a>,
<a href='https://patents.google.com/patent/WO2019089192A1/en?oq=WO+2019%2f089192+Al'>[Patent]</a>
</div>
<br>
<div> <strong>Guided Attention Inference Network</strong><br>
<strong><u>Kunpeng Li</u></strong>, Ziyan Wu, Kuan-Chuan Peng, Jan Ernst, Yun Fu <br>
IEEE Transactions on Pattern Analysis and Machine Intelligence (<strong>TPAMI</strong>), 2019
<a href='https://ieeexplore.ieee.org/abstract/document/8733010'>[Paper]</a>
</div>
<div class="spanner"></div>
</div>
<div class="paper" id="RCAN"><img class="paper" src="./pic/RCAN.png" title="Image Super-Resolution Using Very Deep Residual Channel Attention Networks">
<div> <strong>Image Super-Resolution Using Very Deep Residual Channel Attention Networks</strong><br>
Yulun Zhang, <strong><u>Kunpeng Li</u></strong>, Kai Li, Bineng Zhong, Yun Fu <br>
European Conference on Computer Vision (<strong>ECCV</strong>), 2018
<a href='https://arxiv.org/abs/1807.02758'>[PDF]</a>,
<a href='https://github.com/yulunzhang/RCAN'>[Code]</a>
</div>
<div class="spanner"></div>
</div>
<!-- <div class="paper" id="SN_reID"><img class="paper" src="./pic/SN_reID.png" title="Support Neighbor Loss for Person Re-Identification">
<div> <strong>Support Neighbor Loss for Person Re-Identification</strong><br>
Kai Li, Zhengming Ding, <strong><u>Kunpeng Li</u></strong>, Yulun Zhang, Yun Fu <br>
ACM Multimedia (<strong>ACM MM</strong>), 2018
<a href='https://arxiv.org/abs/1808.06030'>[PDF]</a>,
<a href='https://github.com/kailigo/SN_loss_for_reID'>[Code]</a>
</div>
<div class="spanner"></div>
</div>
-->
<div class="paper" id="MDSLT"><img class="paper" src="./pic/MDSLT.png" title="Multi-stream Deep Similarity Learning Networks for Visual Tracking">
<div> <strong>Multi-stream Deep Similarity Learning Networks for Visual Tracking</strong><br>
<strong><u>Kunpeng Li</u></strong>, Yu Kong, Yun Fu <br>
International Joint Conference on Artificial Intelligence (<strong>IJCAI</strong>), 2017
<a href='https://www.ijcai.org/proceedings/2017/0301.pdf'>[PDF]</a>,
<a href='https://drive.google.com/file/d/1WoUK3G4khzI_qw1T48hvxa-7yPYYv3Ua/view'>[Benchmark Results]</a>,
<a href='https://www.youtube.com/watch?v=UgrwdRQYAIA'>[Video Results]</a>
</div>
<div class="spanner"></div>
</div>
<div class="paper" id="AirWriting"><img class="paper" src="./pic/AirWriting.png" title="A New Fingertip Detection and Tracking Algorithm and Its Application on Writing-in-the-air System">
<div> <strong>A New Fingertip Detection and Tracking Algorithm and Its Application on Writing-in-the-air System</strong><br>
<strong><u>Kunpeng Li</u></strong>, Xin Zhang <br>
IEEE International Conference on Image and Signal Processing, 2014
<a href='https://ieeexplore.ieee.org/abstract/document/7003824'>[PDF]</a>,
<a href='https://drive.google.com/file/d/0B6GPXWGb8uiyUW8tN2tmUERxWmc/view'>[Extension]</a>,
<a href='https://www.youtube.com/watch?v=gCHIq0OQwUM'>[Demo]</a>
</div>
<div class="spanner"></div>
</div>
</div>
</div>
<div style="clear: both;">
<div class="section">
<h2 id="confpapers">Awards</h2>
<div class="paper">
<ul>
<li> College of Engineering Outstanding Graduate Research Award, 2021. </li>
<li> <strong>1st Prize</strong> in both RGB and RGB-D tracks of CVPR21 Challenge on Large Scale Signer Independent Isolated Sign Language Recognition <a href='https://github.com/jackyjsy/CVPR21Chal-SLR'>[code]</a> <a href='http://chalearnlap.cvc.uab.es/challenge/43/description/'>[link]</a> <a href='https://coe.northeastern.edu/news/smile-lab-wins-1st-at-conference-on-computer-vision-and-pattern-recognition/'>[news]</a>, 2021 </li>
<li> Travel award for attending ICCV, 2019. </li>
<li> Travel award for attending CVPR, 2018. </li>
<li> NSF travel award for attending IJCAI, 2017. </li>
<li> Dean's Fellowship, College of Engineering, Northeastern University, 2016. </li>
<li> First Prize in Digital Image Processing Contest of Guangdong Province, 2015. </li>
<li> National Scholarship, South China University of Technology, 2014. </li>
<li> Meritorious Winner in International Mathematical Contest in Modeling, 2014. </li>
<!-- <li> Best Undergraduate Thesis, School of Remote Sensing and Information Engineering, Wuhan University, 2014. </li> -->
<!-- <li> Excellent Undergraduate Students, Wuhan University, 2012. </li> -->
</ul>
</div>
</div>
</div>
<div style="clear: both;">
<div class="section">
<h2 id="confpapers">Professonal Activities</h2>
<div class="paper">
<ul>
<p><font size="5">
<li> <strong>Area Chair or Senior Program Committee Member</strong> for IJCAI </li>
<li> <strong>PC Member or Reviewer</strong> for CVPR, ICCV, ECCV, NeurIPS, ICLR, ICML, AAAI, TPAMI, IJCV, TIP, TNNLS, TMI, TMM, TOMM, TKDD </li>
<!-- <li> External Reviewer for CVPR, ICCV, ECCV, AAAI, IJCAI, AAAI (2017). </li> -->
<!-- <li> Guest reviewer for AAAI 2017. </li> -->
<!-- <li> IEEE student member, AAAI student member. </li> -->
</font></p>
</ul>
</div>
</div>
</div>
<!-- <div style="clear:both;">
<p align="right"><font size="5">Last Updated on 11th Dec, 2017</a></font></p>
<p align="right"><font size="5">Published with <a href='https://pages.github.com/'>GitHub Pages</a></font></p>
</div> -->
<!-- <hr> -->
<!-- <div id="clustrmaps-widget"></div><script type="text/javascript" id="clustrmaps" src="//cdn.clustrmaps.com/map_v2.js?d=IF-jAGUNTygi5pa59hxIgtJU2XqT-rGoO58Z3E1vHZk&cl=ffffff&w=a"></script> -->
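<!-- ClustrMaps visitor map widget -->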
<div id="clustrmaps-widget"></div><script type='text/javascript' id='clustrmaps' src='//cdn.clustrmaps.com/map_v2.js?cl=ffffff&w=300&t=m&d=IF-jAGUNTygi5pa59hxIgtJU2XqT-rGoO58Z3E1vHZk'></script>
<!-- -->
</body>
</html>