<!DOCTYPE HTML>
<html>
<head>
<title>Tanmay Agarwal</title>
<link rel="shortcut icon" type="image/jpg" href="images/icon.jpg">
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<meta name="description" content="" />
<meta name="keywords" content="" />
<!--[if lte IE 8]><script src="css/ie/html5shiv.js"></script><![endif]-->
<script src="js/jquery.min.js"></script>
<script src="js/jquery.scrolly.min.js"></script>
<script src="js/jquery.scrollzer.min.js"></script>
<script src="js/skel.min.js"></script>
<script src="js/skel-layers.min.js"></script>
<script src="js/init.js"></script>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
<link rel="stylesheet" href="css/bootstrap.min.css">
<script type="text/javascript" src="js/bootstrap.min.js"></script>
<noscript>
<link rel="stylesheet" href="css/skel.css" />
<link rel="stylesheet" href="css/style.css" />
<link rel="stylesheet" href="css/style-wide.css" />
</noscript>
<link href="https://fonts.googleapis.com/css?family=Source+Sans+Pro" rel="stylesheet">
<!--[if lte IE 9]><link rel="stylesheet" href="css/ie/v9.css" /><![endif]-->
<!--[if lte IE 8]><link rel="stylesheet" href="css/ie/v8.css" /><![endif]-->
<script type="text/javascript" src="js/main.js"></script>
</head>
<body>
<!-- Header -->
<div id="header" class="skel-layers-fixed">
<div class="top">
<!-- Logo -->
<div id="logo">
<p id="intro"><a href="https://agarwaltanmay.github.io/" style="text-decoration: none">Tanmay Agarwal</a></p>
<p class="modified-p">Graduate Research Assistant</p>
<p class="modified-p"><a href="https://www.ri.cmu.edu/">Robotics Institute</a>, <a href="https://www.cmu.edu/">Carnegie Mellon University</a></p>
<p class="modified-p">Pittsburgh, PA</p>
</div>
<!-- Nav -->
<nav id="nav">
<ul>
<li><a href="#about" id="about-link" class="skel-layers-ignoreHref"><span class="icon fa-user">About Me</span></a></li>
<li><a href="#work-ex" id="work-ex-link" class="skel-layers-ignoreHref"><span class="icon fa-list-ul">Work Experience</span></a></li>
<li><a href="#projects" id="projects-link" class="skel-layers-ignoreHref"><span class="icon fa-cubes">Projects</span></a></li>
<li><a href="https://drive.google.com/file/d/1abOsdI2XC2lNcEgCn6nQ8bzeMEZ-lJnJ/view?usp=sharing" id="cv-link" target="_blank" class="skel-layers"><span class="icon fa-download">Resume</span></a></li>
<li><a href="#contact" id="contact-link" class="skel-layers-ignoreHref"><span class="icon fa-address-card">Contact</span></a></li>
</ul>
</nav>
</div>
<div class="bottom">
<!-- Social Icons -->
<ul class="icons">
<li><a href="https://www.linkedin.com/in/tanmay-agarwal-200a43105" target="_blank" class="icon fa-linkedin"><span class="label">Linkedin</span></a></li>
<li><a href="https://github.com/agarwaltanmay" target="_blank" class="icon fa-github"><span class="label">Github</span></a></li>
</ul>
</div>
</div>
<!-- Main -->
<div id="main">
<!-- About Me -->
<section id="about" class="one dark cover">
<div class="container">
<header>
<h2>About Me</h2>
</header>
<div id="self-image"><img src="images/new_self.jpg" height="300px" width="200px"></div>
<p id="self-p">
Hey! I am Tanmay, a graduate research student in the <a href="https://www.cs.cmu.edu/">School of Computer Science</a> at the <a href="https://www.ri.cmu.edu">Robotics Institute</a>, <a href="https://www.cmu.edu">Carnegie Mellon University</a>, pursuing my <a href="https://www.ri.cmu.edu/education/academic-programs/master-of-science-robotics/">Master of Science in Robotics</a>, with 4 years of professional experience in Machine Learning and Computer
Vision split between academia and industry. I am fortunate to be advised by <a href="https://www.cs.cmu.edu/~schneide/">Prof. Jeff Schneider</a> and work as a member of the <a href="https://www.autonlab.org">Auton Lab</a>. I am extremely passionate about developing large-scale human-level AI applications that can perceive, reason and make
complex decisions for real-world problems in the fields of Robotics, Healthcare, and Finance.
<br><br>
My research lies at the intersection of Machine Learning and Computer Vision, with a focus on Reinforcement Learning to
solve complex decision-making problems using visual inputs. My main goal is to learn generalizable and adaptable
behaviors for self-driving cars, by developing algorithms that learn optimal decision-making policies directly from raw
sensor data by using the paradigm of Deep Reinforcement Learning.
Besides this, I am
also working on understanding how data from multiple sensors can be leveraged to learn an ideal perception stack for
2D/3D detection, segmentation and tracking.
<br><br>
Prior to joining CMU, I worked with <a href="https://research.samsung.com/sri-b">Samsung Research Institute Bangalore</a> as a Senior Software Engineer, developing key features for the Samsung Keyboard application, at both the research and production stages, using Machine Learning and Natural Language Processing. Besides this, I have interned at <a href="https://www.jpmorgan.com/country/IN/EN/jpmorgan">JP Morgan Chase</a> and visited <a href="http://www.barc.gov.in">Bhabha Atomic Research Centre, Mumbai</a> as a visiting research scholar. I was also awarded the <a href="https://www.daad.de/en/">DAAD-WISE</a> Scholarship in 2015 to work with <a href="https://www.ais.uni-bonn.de/behnke/">Prof. Sven Behnke</a> on developing core vision algorithms for humanoid soccer bots. I completed my Bachelor's degree in Electronics & Instrumentation from <a href="https://www.bits-pilani.ac.in">BITS Pilani, Pilani</a> in 2016.
<br><br>
My research experiences have given me the opportunity to collaborate with numerous researchers and engineers on challenging real-world problems, which has deepened my interest in these fields and strengthened my fundamental understanding of AI and engineering. Please <a href="mailto:tanmaya@cs.cmu.edu" target="_blank">reach out</a> if you would like to learn more about my background and experiences.
</p>
</div>
</section>
<!-- Portfolio -->
<section id="work-ex" class="two">
<div class="container">
<header>
<h2>Work Experience</h2>
</header>
<table>
<tr>
<td class="right-aligned-text">
<span class="bold-text">November 2018 - Present</span>
<br>
<span class="small-italics-text">Graduate Research Assistant</span>
</td>
<td class="left-aligned-text">
<span class="bold-text left-floating-text"><a href="https://www.ri.cmu.edu/">Robotics Institute, </a><a href="https://www.cmu.edu/">Carnegie Mellon University</a></span>
<span class="bold-text right-floating-text">Pittsburgh, PA</span><br>
<span class="small-italics-text"><a href="https://www.autonlab.org/">Auton Lab</a></span><br><br>
<span class="bullet-point">
Designed end-to-end control agents for autonomous cars in the CARLA simulator, learned from sensor and measurement data from the environment.
</span>
<span class="bullet-point">
Implemented popular RL algorithms like A2C, DDPG, PPO and TRPO to learn efficient driving policies and compared these
algorithms to understand their sample complexity.
</span>
</td>
</tr>
<tr>
<td class="right-aligned-text">
<span class="bold-text">August 2016 - July 2018</span>
<br>
<span class="small-italics-text">Senior Software Engineer (Research)</span>
</td>
<td class="left-aligned-text">
<span class="bold-text left-floating-text"><a href="https://research.samsung.com/sri-b">Samsung Research Institute</a></span>
<span class="bold-text right-floating-text">Bangalore, India</span><br>
<span class="small-italics-text">Advanced Technology Lab - Multimedia Division</span><br><br>
<span class="bullet-point">
Developed a novel geometric algorithm for the virtual keyboard that predicts the word corresponding to a swipe gesture input by matching the sequence of touch points to candidate word paths.
</span>
<span class="bullet-point">
Designed a neural architecture that predicts probable words for a swipe gesture input in real time, modeling a uni-directional GRU-RNN with Connectionist Temporal Classification (CTC) as its loss function. The proposed framework resolves words using a weighted FST and achieves 90% accuracy in top-5 predictions with an average swipe time saving of around 0.65 s.
</span>
</td>
</tr>
<tr>
<td class="right-aligned-text">
<span class="bold-text">January 2016 – June 2016</span>
<br>
<span class="small-italics-text">Software Developer Intern</span>
</td>
<td class="left-aligned-text">
<span class="bold-text left-floating-text"><a href="https://www.jpmorgan.com/country/IN/EN/jpmorgan">JP Morgan Chase & Co.</a></span>
<span class="bold-text right-floating-text">Bangalore, India</span><br>
<span class="small-italics-text">Risk & Finance Technology Division</span><br><br>
<span class="bullet-point">
Developed a distributed computing application using Apache Spark that performs millions of computations on financial
data in real-time.
</span>
<span class="bullet-point">
Implemented a multi-variate classifier to classify risk-type using Spark MLlib and Spark core libraries.
</span>
</td>
</tr>
<tr>
<td class="right-aligned-text">
<span class="bold-text">May 2015 – July 2015</span>
<br>
<span class="small-italics-text">Visiting Research Scholar</span>
</td>
<td class="left-aligned-text">
<span class="bold-text left-floating-text"><a href="https://www.uni-bonn.de/the-university">University of Bonn, </a><a href="http://vi.cs.uni-bonn.de/en">Institute of Computer Science VI</a></span>
<span class="bold-text right-floating-text">Bonn, Germany</span><br>
<span class="small-italics-text"><a href="http://www.ais.uni-bonn.de/">Autonomous Intelligent Systems Group</a></span><br><br>
<span class="bullet-point">
Developed vision algorithms for a humanoid soccer robot to detect the soccer ball and goal posts in real time. Implemented a Histogram of Oriented Gradients (HOG) feature-based cascade classifier and augmented it with shape and histogram matching to achieve over 90% detection accuracy.
</span>
<span class="bullet-point">
Deployed these algorithms on a robot that won the RoboCup Design Award for the best Humanoid Robot Design at RoboCup 2015.
</span>
</td>
</tr>
<tr>
<td class="right-aligned-text">
<span class="bold-text">Aug 2014 – Dec 2014</span>
<br>
<span class="small-italics-text">Visiting Researcher</span>
</td>
<td class="left-aligned-text">
<span class="bold-text left-floating-text"><a href="https://www.ceeri.res.in/">CSIR - Central Electronics Engineering Research Institute</a></span>
<span class="bold-text right-floating-text">Pilani, India</span><br>
<span class="small-italics-text"><a href="https://www.ceeri.res.in/departments/cyber-physical-systems/control-and-automation/">Control & Automation Group</a></span><br><br>
<span class="bullet-point">
Designed a fully autonomous mobile robot for indoor navigation using a Microsoft Kinect sensor.
</span>
<span class="bullet-point">
Developed algorithms for vision-based obstacle avoidance, visual odometry that positions and orients the robot with respect to environmental SURF features, and EKF-SLAM to map its surroundings.
</span>
</td>
</tr>
<tr>
<td class="right-aligned-text">
<span class="bold-text">May 2014 – July 2014</span>
<br>
<span class="small-italics-text">Summer Intern</span>
</td>
<td class="left-aligned-text">
<span class="bold-text left-floating-text"><a href="http://www.barc.gov.in/">Bhabha Atomic Research Centre (BARC)</a></span>
<span class="bold-text right-floating-text">Mumbai, India</span><br>
<span class="small-italics-text">Electronics Division</span><br><br>
<span class="bullet-point">
Designed a gas-leakage alarm system using customized sensors and fabricated the underlying electronics after verifying the design in CAD tools. Also interfaced the system with an LCD to monitor the gas concentration.
</span>
</td>
</tr>
</table>
</div>
</section>
<!-- Projects -->
<section id="projects" class="three dark">
<div class="container">
<header>
<h2>Projects</h2>
</header>
<table style="width:100%">
<tr>
<td class="float-td"><img src="images/neurips.jpg" class="float-img" /></td>
<td class="float-td">
<div class="float-title">Learning to Drive using Waypoints</div>
<div class="float-advisor"><i>NeurIPS 2019 Workshop on Machine Learning for Autonomous Driving, 2019</i>
<br>Advisor: <a href="https://www.cs.cmu.edu/~schneide/">Prof. Jeff Schneider</a>, <a href="https://www.ri.cmu.edu/">Robotics Institute, CMU</a></div>
<div class="float-refs">Links: [<a href="https://drive.google.com/file/d/1F93mHecRsWU4IAPEoZPwzrMVsJX6DEIN/view?usp=sharing">paper</a> / <a href="https://drive.google.com/file/d/1sj0CXkBnuNbR0CSyL2RHwbvf7D4McKL_/view?usp=sharing">poster</a> / <a href="https://www.talkrl.com/episodes/neurips-2019-deep-rl-workshop">media</a>]</div>
<div class="float-content">Traditional autonomous vehicle pipelines are highly modularized with different subsystems for localization, perception,
actor prediction, planning, and control. Though this approach provides ease of interpretation, its generalizability to
unseen environments is limited and hand-engineering of numerous parameters is required, especially in the prediction and
planning systems. Recently, Deep Reinforcement Learning (DRL) has been shown to learn complex strategic games and
perform challenging robotic tasks, which provides an appealing framework for learning to drive. In this paper, we
propose an architecture that learns directly from semantically segmented images along with waypoint features to drive
within the CARLA simulator using the Proximal Policy Optimization (PPO) algorithm. We report significant improvement in
performance on the benchmark tasks of driving straight, one turn and navigation with and without dynamic actors.</div>
</td>
</tr>
<tr>
<td class="float-td"><img src="images/mmml.jpg" class="float-img" /></td>
<td class="float-td">
<div class="float-title">Argoverse: Multi-modal Multi-task 3D Object Detection</div>
<div class="float-advisor">Advisor: <a href="https://www.cs.cmu.edu/~morency/">Prof. Louis-Philippe Morency</a>, <a href="https://www.lti.cs.cmu.edu/">Language Technologies Institute, CMU</a></div>
<!-- <div class="float-refs">Links: [<a href="paper">paper</a> / <a href="poster">poster</a> / <a href="code">code]</a></div> -->
<div class="float-content">3D object detection is one of the most pivotal tasks in autonomous driving that governs major downstream tasks like
tracking, prediction, planning, and control, critical in urban environment driving. To ensure reliability and accuracy
of such a system, we propose a novel multi-modal neural architecture that incorporates ideas such as dynamic voxelization,
multi-view fusion, a cross-modal coupled fusion between RGB and LiDAR, soft attention for geometric alignment, and
multi-task losses. We observe very promising results within only 8 epochs of training over <i>25%</i> of the total
data and hope to contribute a novel architecture with strong performance.</div>
</td>
</tr>
<tr>
<td class="float-td"><img src="images/jenga.jpg" class="float-img" /></td>
<td class="float-td">
<div class="float-title">Visual Learning for Jenga Tower Stability Prediction</div>
<div class="float-advisor">Advisor: <a href="http://www.cs.cmu.edu/~abhinavg/">Prof. Abhinav Gupta</a>, <a href="https://www.ri.cmu.edu/">Robotics Institute, CMU</a></div>
<div class="float-refs">Links: [<a href="https://drive.google.com/file/d/1JLtZN4aWtWXCsiIKxFkUUIOIxlhPYIuA/view?usp=sharing">report</a> / <a href="https://github.com/agarwaltanmay/16824-project-maskrcnn">code</a>]</div>
<div class="float-content">Despite advances in robotics and vision, there are simple tasks such as object manipulation which are intuitive for
humans but machines struggle with. Further, more complex manipulation tasks such as stacking or de-stacking of
structures require not only dexterous manipulation abilities but also the capability to learn and predict stability of
the stack being constructed. In this work, we explore if this capability can be developed for a robot through visual
learning and recognition.</div>
</td>
</tr>
<tr>
<td class="float-td"><img src="images/tweet.jpg" class="float-img" /></td>
<td class="float-td">
<div class="float-title">Semi-Supervised Stance Detection in tweets</div>
<div class="float-advisor">Advisor: <a href="https://www.cs.cmu.edu/~lwehbe/">Prof. Leila Wehbe</a>, <a href="https://www.ml.cmu.edu/">Machine Learning Department, CMU</a></div>
<div class="float-refs">Links: [<a href="https://drive.google.com/file/d/1Mqk6DypNrA4FKXuNimqNPjZaU1B0VTgk/view?usp=sharing">paper</a> / <a href="https://drive.google.com/open?id=1f6RtfFRSvfNrWGqmK_P3zFvnhq6QnvbT">poster</a> / <a href="https://github.com/agarwaltanmay/10701-project">code</a>]
</div>
<div class="float-content">Implemented a heuristic-based semi-supervised learning approach, LDA2Vec (Moody CoNLL 2016) for stance detection that
learns a coherent and informed embedding comparable to Para2Vec, concurrently bolstering interpretability of topics by
creating representations similar to those in Latent Dirichlet Allocation. We conclude that adding unlabelled data vastly improves classifier performance, by ~6% for LDA and ~20% for Para2Vec. Overall, Para2Vec seems to perform better than vanilla LDA. While we obtain topics of similar quality with LDA2Vec as with LDA, the generated embeddings do not match the classification quality of Para2Vec.</div>
</td>
</tr>
<tr>
<td class="float-td"><img src="images/non_convex.jpg" class="float-img" /></td>
<td class="float-td">
<div class="float-title">Non-Convex Optimization for Machine Learning</div>
<div class="float-advisor">Advisor: <a href="http://www.cs.cmu.edu/~me/">Prof. Michael Erdmann</a>, <a
href="https://csd.cmu.edu/">Computer Science Department, CMU</a></div>
<div class="float-refs">Links: [<a href="https://drive.google.com/open?id=1n6-9f3ioHem8aK5mTQeVFYjsDSFP0Bor">report</a> / <a
href="https://drive.google.com/open?id=1TuZA2fkCKu65GknyEx-XxRhvLbbMoiAM">slides</a>]
</div>
<div class="float-content">Many real-world machine learning problems require solving a non-convex optimization problem, and this has been a niche area of research because non-convex problems are in general NP-hard to solve. In this project, we explore the space of non-convex optimization for common machine learning problems. We motivate the non-convex problem formulations of sparse recovery and low-rank matrix recovery with their important real-world applications, and share the fundamental ideas behind the two main solution approaches: <i>convex relaxation</i> and <i>direct non-convex optimization</i>. We also explain some of the popular non-convex optimization algorithms that have been shown to be very efficient in practice and to yield provably optimal solutions in polynomial time, such as projected gradient descent and its variants. We then discuss one application of sparse recovery in astronomy and share results from our implementation of the state-of-the-art algorithm (Högbom's CLEAN) to denoise noisy spatial images.</div>
</td>
</tr>
<tr>
<td class="float-td"><img src="images/alpr.jpg" class="float-img" /></td>
<td class="float-td">
<div class="float-title">Automatic License Plate Recognition</div>
<div class="float-advisor">Advisor: <a href="http://www.cs.cmu.edu/~slucey/">Prof. Simon Lucey</a>, <a
href="https://www.ri.cmu.edu/">Robotics Institute, CMU</a></div>
<div class="float-refs">Links: [<a
href="https://drive.google.com/open?id=1d6GcXXEPRgJl9NDY0bJeabTnARIjg90P">slide</a>]
</div>
<div class="float-content">Built an end-to-end system for Automatic License Plate Recognition using a YOLO object detector and deep character-recognition models.</div>
</td>
</tr>
<tr>
<td class="float-td"><img src="images/slam.jpg" class="float-img" /></td>
<td class="float-td">
<div class="float-title">Autonomous bot using Embedded Vision</div>
<div class="float-advisor">Advisor: <a href="https://www.ceeri.res.in/profiles/j-l-raheja/">Dr. Jagdish Lal Raheja, CEERI, Pilani</a></div>
<div class="float-refs">Links: [<a href="https://drive.google.com/open?id=1lBcE9o2hVRe74hQbhCx6VgzQKOZdPerd">report</a> / <a href="https://drive.google.com/open?id=1z0HuM6jr6tlxUjgQZ-ptU0rRxpaGBEX6">slides</a> / <a href="https://github.com/agarwaltanmay/SLAM">code</a>]
</div>
<div class="float-content">An autonomous mobile robot is a machine that navigates in an unknown and unpredictable environment. Such robots cannot always be programmed to execute predefined actions, because the outputs of the sensors that control the motor movements are not known in advance. They therefore implement cognitive computing that processes dynamic streams of the present situation and decides movement accordingly. These robots find enormous applications in space exploration and aerial surveillance systems; simpler applications include domestic cleaning bots, autonomous quadcopters, and lawn mowers. This report illustrates a few basic techniques and algorithms used to implement an autonomous mobile robot that also constructs a 3D map of the environment using the Simultaneous Localization and Mapping (SLAM) technique. The bot should ultimately traverse between any two given points along the shortest possible path, avoiding obstacles on its way. The report also presents the autonomous obstacle avoider developed so far, an important milestone towards achieving the final bot.</div>
</td>
</tr>
<tr>
<td class="float-td"><img src="images/gesture.jpg" class="float-img" /></td>
<td class="float-td">
<div class="float-title">Control of mobile robots using Gesture Recognition</div>
<div class="float-advisor">Advisor: <a href="https://www.bits-pilani.ac.in/Pilani/surekha/profile">Prof. Surekha Bhanot</a>, <a href="https://www.bits-pilani.ac.in/pilani/ElectricalElectronicsInstrumentation/Home">EEE Department, BITS Pilani</a></div>
<div class="float-refs">Links: [<a href="https://drive.google.com/open?id=1f-RhW5iJYDNBiLic5QI_WhSxjuofJu0v">report</a> / <a href="https://drive.google.com/open?id=1pX0okFz-C5_QErwvrCZTNRCIRhGb5OYH">slides</a> / <a href="https://github.com/agarwaltanmay/gesture-recognition">code</a>]
</div>
<div class="float-content">Compared the performance of conventional vision-based gesture recognition algorithms, such as background subtraction and appearance-based models, with a sophisticated ML approach, the Haar cascade classifier in OpenCV.</div>
</td>
</tr>
<tr>
<td class="float-td"><img src="images/binary.jpg" class="float-img" /></td>
<td class="float-td">
<div class="float-title">Binary classifiers for Medical Testing</div>
<div class="float-advisor">Advisor: <a href="https://universe.bits-pilani.ac.in/pilani/goel/profile">Prof. Navneet Goyal</a>, <a href="https://www.bits-pilani.ac.in/pilani/computerscience/ComputerScience">CSIS
Department, BITS Pilani</a></div>
<!-- <div class="float-refs">Links: [<a href="paper">paper</a> / <a href="poster">poster</a> / <a href="code">code</a>] -->
<div class="float-content">In the modern world, every hospital is well equipped with monitoring and other data-collection devices, and data is gathered and shared in large information systems. Extracting meaning from this data can therefore be very beneficial, especially where proper diagnosis is unavailable. Our project aims to build classification models from these data sets that predict the most likely diagnosis for a particular set of symptoms. We use binary classifiers such as multi-layer neural networks and decision trees to analyze and evaluate the detection of heart disease and nephritis, achieving accuracies over 90% on data procured from the UCI Machine Learning Repository.</div>
</td>
</tr>
<tr>
<td class="float-td"><img src="images/solar.jpg" class="float-img" /></td>
<td class="float-td">
<div class="float-title">Predicting harnessable solar energy using weather data</div>
<div class="float-advisor">Advisor: <a href="https://universe.bits-pilani.ac.in/pilani/goel/profile">Prof.
Navneet Goyal</a>, <a href="https://www.bits-pilani.ac.in/pilani/computerscience/ComputerScience">CSIS
Department, BITS Pilani</a></div>
<div class="float-refs">Links: [<a href="https://github.com/agarwaltanmay/prediction">code</a>]
</div>
<div class="float-content">Implemented a multi-variate regression model to estimate harnessable solar energy
after reducing the dimensionality of
the data using PCA. Also experimented with kernel SVRs to fit a regression model.
</div>
</td>
</tr>
<tr>
<td class="float-td"><img src="images/text_summary.jpg" class="float-img" /></td>
<td class="float-td">
<div class="float-title">Automatic Story Summarizer</div>
<div class="float-advisor">Advisor: <a href="https://universe.bits-pilani.ac.in/pilani/vandana/profile">Prof.
Vandana Agarwal</a>, <a href="https://www.bits-pilani.ac.in/pilani/computerscience/ComputerScience">CSIS Department, BITS Pilani</a></div>
<div class="float-refs">Links: [<a href="https://drive.google.com/open?id=1FnzX1WNKuWr6IUptNULZDPZW6vfCPkXC">report</a> / <a href="https://github.com/agarwaltanmay/text-summarizer">code</a>]
</div>
<div class="float-content">Designed a story summarizer using different information and text extraction methods. Compared the results and concluded that an ensemble of term-frequency and understanding-based generation methods produced the most coherent summaries.
</div>
</td>
</tr>
<tr>
<td class="float-td"><img src="images/braille.jpg" class="float-img" /></td>
<td class="float-td">
<div class="float-title">Unaided Braille Encryption</div>
<!-- <div class="float-advisor">Advisor: <a href="https://universe.bits-pilani.ac.in/pilani/goel/profile">Prof.
Navneet Goyal</a>, <a href="https://www.bits-pilani.ac.in/pilani/computerscience/ComputerScience">CSIS
Department</a></div> -->
<!-- <div class="float-refs">Links: [<a href="paper">paper</a> / <a href="poster">poster</a> / <a href="code">code</a>] -->
<div class="float-content">Developed a prototype of an e-reader for blind people that processes any form of textual document using Optical Character Recognition and dynamically displays the corresponding Braille text on a two-dimensional array of Braille cells.
</div>
</td>
</tr>
</table>
</div>
</section>
<!-- Contact -->
<section id="contact" class="four">
<div class="container">
<header>
<h2>Contact</h2>
</header>
<span class="custom-contact">
To get in touch with me, please drop me a mail at: <a href="mailto:tanmaya@cs.cmu.edu" target="_blank">tanmaya[at]cs[dot]cmu[dot]edu</a> / <a href="mailto:tanmay.agrawal@hotmail.com" target="_blank">tanmay[dot]agrawal[at]hotmail[dot]com</a>
</span>
</div>
</section>
</div>
</body>
</html>