---
title: The Variation Lab @ Harvard SEAS
permalink: /
layout: default
---
<table>
<tr>
<td class="small">
<!-- <div class="no-top-padding">
<p class="no-top-padding">
We design, build and evaluate systems for comprehending and interacting with population-level structure and trends in large code and data corpora.
These systems augment human intelligence by giving users a “useful degree of comprehension in a situation that previously was too complex.” -D. C. Engelbart
</p>
</div> -->
<!-- This year's course offering co-locates two advanced graduate seminars in HCI (CS 279r) and PL (CS 252r). From HCI, students will learn new design and evaluation methods focused on utility and usability. From PL, students will learn new PL techniques relevant to building user-centric systems.
Students enrolled in 252r will select and present papers on PL topics including type systems, program synthesis, and metaprogramming. Students enrolled in 279r will select and present systems HCI papers about communicating intent between humans and computers, such as programming by demonstration and representing transformations on large piles of data. Activities will include a small number of lectures, discussion of relevant literature in each field, and a project, in which students from 252r and 279r will work together in groups to propose and carry out research at the intersection of PL and HCI. -->
<!-- <div class="header lead">
Who is this course for?
</div>
<div>
<p>
Designed for PhD students from all areas. Masters students and advanced undergraduates are welcome, particularly those who wish to do research (or write a thesis) in an area related to Human-Computer Interaction and/or Programming Languages. Undergraduates enrolling in 279r or 252r are recommended but not required to have taken 179 or 152, respectively.
</p>
</div> -->
<!-- <div class="header lead top-padding">
We're hiring!
</div> -->
<p>
<span class="slightlybold">Interested in working with us?</span> Before reaching out, please follow the instructions <a href="/glassman.html">here</a>. (It involves picking a paper to read from our <a href="#publications">publications list</a> below.)
<!-- A vibrant, supportive community of undergraduates, masters, and PhD students who are <i>creating tools that augment human intelligence with computation.</i> -->
<!-- for thought and action -->
</p>
<div class="header lead top-padding">
Vision & Values
</div>
<p>
<span class="slightlybold">What we do</span><br>
AI is powerful, but it can make objective errors, generate contextually inappropriate outputs, and offer disliked options. We define and build AI-resilient interfaces that help people use AI while remaining resilient to AI choices that are not right, or not right for them. This is critical during context- and preference-dominated open-ended tasks, like ideating, searching, sensemaking, and reading or writing text and code at scale. AI-resilient interfaces improve AI safety, usability, and utility by working with, not against, human perception, attention, and cognition. To achieve this, we derive design implications from cognitive science, even when they fly in the face of common usability guidelines.
</p>
<p>
<span class="slightlybold">What success looks like</span><br>
A vibrant, supportive community of undergraduates, master's students, PhD students, postdoctoral scholars, and collaborators who are creating tools that augment human sensemaking and human-computer collaboration.
<!-- for thought and action -->
</p>
<!-- <p> -->
<span class="slightlybold">How we’re getting there</span><br>
<ul>
<!-- <li>Practicing project planning and replanning</li> -->
<!-- <li>Quickly collecting data that reduces our uncertainty about our understanding of the problem or the appropriateness of our approach</li> -->
<li>Frequent feedback from each other</li>
<!-- <li>Replanning rather than submitting unfinished, poorly thought-out, and/or hastily written work</li> -->
<!-- <li>Working to create, publish, and publicize research we’re proud of</li> -->
<!-- <ul>
<li>at top-tier HCI research conferences, i.e., CHI, UIST, IUI, and CSCW, and journals, i.e., TOCHI</li>
<li>Replanning rather than submitting unfinished, poorly thought-out, and/or hastily written work</li>
</ul> -->
<li>Reflecting on our personal and community practices</li>
<li>Getting enough sleep so we can bring our best selves to our work</li>
</ul>
<!-- </p> -->
<div class="header lead top-padding" id="publications">
Latest Publications (CHI'24, DIS'24, & ICML'24 Pre-prints)
</div>
<div>
<!-- <p>
<span class="boldish">HCI + AI</span>
</p> -->
<ul>
<li><a href="papers/gptsm.pdf"><b>An AI-Resilient Text Rendering Technique for Reading and Skimming Documents</b> [Pre-print]</a>
<br/><b>CHI</b> 2024
<br/>Ziwei Gu, Ian Arawjo, Kenneth Li, Jonathan K. Kummerfeld, Elena L. Glassman
<ul>
<li><a href="papers/gptsm_eye_lbw.pdf"><b>Why Do Skimmers Perform Better with Grammar-Preserving Text Saliency Modulation (GP-TSM)? Evidence from an Eye Tracking Study</b> [Pre-print]</a>
<br/><b>CHI Late Breaking Work</b> 2024
<br/>Ziwei Gu, Owen Raymond, Naser Al Madi, Elena L. Glassman
</li>
<li><a href="papers/AI_Resilient_Interfaces_draft.pdf"><b>AI-Resilient Interfaces (Working Draft)</b></a>
<br/>Elena L. Glassman, Ziwei Gu, Jonathan K. Kummerfeld
</li>
<li><em-demo>Try it out! <a href="https://gptsm-6b7fc3be6bdb.herokuapp.com/">GP-TSM Live Demo</a></em-demo></li>
<li><em-open>Or use it in your own system! It's open source. <a href="https://github.com/ZiweiGu/GP-TSM"><b>[Github]</b></a></em-open></li>
</ul>
<!-- <br/><span><i>Pre-recorded conference talk (10 min)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/gfmsKkPjTtw" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span> -->
</li>
<li><a href="papers/mesotext.pdf"><b>Supporting Sensemaking of Large Language Model Outputs at Scale</b> [Pre-print]</a>
<br/><b>CHI</b> 2024 <em-honorable>Honorable Mention Award</em-honorable>
<br/>Katy Ilonka Gero, Chelse Swoopes, Ziwei Gu, Jonathan K. Kummerfeld, Elena L. Glassman
<!-- <br/><span><i>Pre-recorded conference talk (10 min)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/gfmsKkPjTtw" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span> -->
<ul>
<li><em-demo>[Demo temporarily experiencing issues] Try it out! <a href="http://language-play.com/mesotext">Demo</a></em-demo></li>
<li><em-open>Try our novel algorithm, Positional Diction Clustering, here: <a href="https://observablehq.com/@glassmanlab/positional-diction-clustering"><b>[ObservableHQ notebook]</b></a></em-open></li>
</ul>
</li>
<li><a href="papers/chainforge.pdf"><b>ChainForge: A Visual Toolkit for Prompt Engineering and LLM Hypothesis Testing</b> [Pre-print]</a>
<br/><b>CHI</b> 2024 <em-honorable>Honorable Mention Award</em-honorable>
<br/>Ian Arawjo, Chelse Swoopes*, Priyan Vaithilingam*, Martin Wattenberg, Elena L. Glassman
<br/><i>*indicates equal contributions</i>
<ul>
<li><em-demo>Try it out! <a href="https://chainforge.ai/">Chainforge.ai</a></em-demo></li>
<li><em-open>Or build on top of it! It's open source. <a href="https://github.com/ianarawjo/ChainForge"><b>[Github]</b></a></em-open></li>
</ul>
<!-- <br/><span><i>Pre-recorded conference talk (10 min)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/gfmsKkPjTtw" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span> -->
</li>
<li><a href="papers/dynavis.pdf"><b>DynaVis: Dynamically Synthesized UI Widgets for Visualization Editing</b> [Pre-print]</a>
<br/><b>CHI</b> 2024 <em-best>Best Paper Award</em-best>
<br/>Priyan Vaithilingam, Elena L. Glassman, Jeevana Priya Inala, Chenglong Wang
<!-- <br/><span><i>Pre-recorded conference talk (10 min)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/gfmsKkPjTtw" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span> -->
</li>
<li><a href="papers/imagining_dis24.pdf"><b>Imagining a Future of Designing with AI: Dynamic Grounding, Constructive Negotiation, and Sustainable Motivation</b> [Pre-print]</a>
<br/><b>DIS</b> 2024
<br/>Priyan Vaithilingam*, Ian Arawjo*, Elena L. Glassman
<br/><i>*indicates equal contributions</i>
<!-- <br/><span><i>Pre-recorded conference talk (10 min)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/gfmsKkPjTtw" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span> -->
</li>
<li><a href="papers/amortizing_icml24.pdf"><b>Amortizing Pragmatic Program Synthesis with Rankings</b> [Pre-print]</a>
<br/><b>ICML</b> 2024
<br/>Yewen Pu, Saujas Vaduguru, Priyan Vaithilingam, Elena L. Glassman, Daniel Fried
<!-- <br/><span><i>Pre-recorded conference talk (10 min)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/gfmsKkPjTtw" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span> -->
</li>
</ul>
</div>
<div class="header lead top-padding">
Recorded Public Talks
</div>
<div>
<p>
<span class="boldish">HCI + AI</span>
</p>
<ul>
<li><b>Stanford CS Seminar on People, Computers, and Design</b> Jan 20th 2023
<br><i>Systems for Supporting Intent Formation and Human-AI Communication</i> <b>[<a href="https://youtu.be/Gvf8oQDcLsQ?si=hJXCKcAcJsEzkP6D">YouTube</a>]</b>
<!-- <br/><i>for the general public</i> -->
<!-- <br/><iframe width="238" height="134" src="https://www.youtube.com/embed/eau8H6287rk" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> -->
</li>
<li><b>Radcliffe Institute Fellowship Public Lecture</b> Oct 13th 2021
<br><i>Novel Interfaces to Support Human Intent Formation and Communication to Humans and Computers Alike</i> (for the general public)
<br/><iframe width="238" height="134" src="https://www.youtube.com/embed/eau8H6287rk" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</li>
</ul>
<p>
<span class="boldish">HCI + PL</span>
</p>
<ul>
<!-- <li><b>Oxford CS Seminar Talk</b> Nov 24th 2021
<br/><i>with Josh Sunshine and Sarah Chasins</i>
<br/><iframe width="238" height="134" src="https://www.youtube.com/embed/Mf6xAupxPdg" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</li> -->
<li><b>ACM SIGPLAN International Conference on Functional Programming (ICFP) Keynote</b> August 24th 2021
<!-- <br/><i>ACM SIGPLAN International Conference on Functional Programming</i> -->
<br><i>Building PL-Powered Systems for Humans</i>
<iframe width="238" height="134" src="https://www.youtube.com/embed/QHqg0Xp7gzY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</li>
<li><b>ACM SIGPLAN conference on Systems, Programming, Languages, and Applications: Software for Humanity (SPLASH REBASE) Speaker</b> Nov 20th, 2020
<!-- <br/><i>ACM SIGPLAN conference on Systems, Programming, Languages, and Applications: Software for Humanity</i> -->
<br><i>PL and HCI: Better Together</i>
<br/><iframe width="238" height="134" src="https://www.youtube.com/embed/r05q5M1HtK0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</li>
</ul>
</div>
<div class="header lead top-padding" id="additional-publications">
Additional Selected Publications
</div>
<div>
<p>
<span class="boldish">HCI + AI</span>
</p>
<ul>
<li><a href="papers/patat_CHI23.pdf"><b>PaTAT: Human-AI Collaborative Qualitative Coding with Explainable Interactive Rule Synthesis</b> [PDF]</a>
<br/><b>CHI</b> 2023
<br/>Simret Araya Gebreegziabher, Zheng Zhang, Xiaohang Tang, Yihao Meng, Elena L. Glassman, and Toby Jia-Jun Li
<br/><span><i>Pre-recorded conference talk (10 min)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/gfmsKkPjTtw" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span>
</li>
<li><a href="papers/Human-Computer_Interaction_and_AI_CHIcourse23.pdf"><b>Human-Computer Interaction and AI: What practitioners need to know to design and build effective AI system from a human perspective</b> [PDF]</a>
<br/>Course @ <b>CHI</b> 2023
<br/>Daniel Russell, Q. Vera Liao, Chinmay Kulkarni, Elena L. Glassman and Nikolas Martelaro
<br/><a href="https://sites.google.com/view/chi-2023-hci-and-ai-tutorial/home">Course website with videos and hand-outs</a>
</li>
<li><a href="papers/gero2022_neuripsworkshop.pdf"><b>Sensemaking Interfaces for Human Evaluation of Language Model Outputs</b> [PDF]</a>
<br/><b>Human Evaluation of Generative Models Workshop @ NeurIPS</b> 2022
<br/>Katy Ilonka Gero, Jonathan K. Kummerfeld, Elena L. Glassman
<!-- <br/><span><i>Pre-recorded conference talk (10 min)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/gfmsKkPjTtw" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span> -->
</li>
<li><a href="papers/HRI_2023_HIRL_Workshop__Learning_about_Robot_Motions_with_Variation_Theory.pdf"><b>Varying How We Teach: Adding Contrast Helps Humans Learn about Robot Motions</b> [PDF]</a>
<br/>Human-Interactive Robot Learning (HIRL) @ <b>HRI</b> 2023
<br/>Tiffany Horter, Elena L. Glassman, Julie Shah, and Serena Booth
</li>
<li><a href="papers/elephant_in2writing_selective_summary.pdf"><b>A Selective Summary of Where to Hide a Stolen Elephant: Leaps in Creative Writing with Multimodal Machine Intelligence</b> [PDF]</a>
<br/>Workshop on Intelligent and Interactive Writing Assistants (In2Writing 2022) @ <b>ACL</b> 2022
<br/>Nikhil Singh*, Guillermo Bernal*, Daria Savchenko*, Elena L. Glassman
<br/><i>*indicates equal contributions</i>
</li>
<li><a href="papers/elephant_tochi2022.pdf"><b>Where to Hide a Stolen Elephant: Leaps in Creative Writing with Multimodal Machine Intelligence</b> [PDF]</a>
<br/><b>TOCHI</b> 2022
<br/>Nikhil Singh*, Guillermo Bernal*, Daria Savchenko*, Elena L. Glassman
<br/><i>*indicates equal contributions</i>
<br/><span><i>Preview (30 sec)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/q5yFIqGkkRg" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span>
<br/><span><i>Pre-recorded conference talk (7 min)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/L2mF4nErtnY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span>
</li>
<li><a href="papers/booth_hri2022.pdf"><b>Revisiting Human-Robot Teaching and Learning Through the Lens of Human Concept Learning Theory</b> [PDF]</a>
<br/><b>HRI</b> 2022 (ACM/IEEE International Conference on Human-Robot Interaction)
<br/>Serena Booth, Sanjana Sharma, Sarah Chung, Julie Shah, Elena L. Glassman
</li>
<li><a href="papers/epsteinEtAl_CS279_icwsm.pdf"><b>Do explanations increase the effectiveness of AI-crowd generated fake news warnings?</b> [PDF]</a>
<br/><b>ICWSM</b> 2022 (International AAAI Conference on Web and Social Media)
<br/>Ziv Epstein*, Nicolo Foppiani*, Sophie Hilgard*, Sanjana Sharma*, Elena L. Glassman, David Rand
<br/><i>*indicates equal contributions</i>
</li>
<li><a href="papers/MiC_NeurIPS21_Loopholes.pdf"><b>Loopholes: a Window into Value Alignment and the Learning of Meaning</b> [PDF]</a>
<br/><a href="https://mic-workshop.github.io/">Meaning in Context: Pragmatic Communication in Humans and Machines</a> (MiC) Workshop @ NeurIPS 2021
<br/>Sophie Bridgers, Elena L. Glassman, Laura Schulz, Tomer Ullman
</li>
<li><a href="papers/Interactive_Visual_Analytics_for_EHRs_VAHC_2021.pdf"><b>Interactive Cohort Analysis and Hypothesis Discovery by Exploring Temporal Patterns in Population-Level Health Records</b> [PDF]</a>
<!-- <br/> -->
<br/><a href="https://www.visualanalyticshealthcare.org/">12th Workshop on Visual Analytics in Healthcare</a> (VAHC) @ <b>IEEE VIS</b> 2021
<br/><em-honorable>Honorable Mention Award</em-honorable>
<br/>Tianyi Zhang, Thomas H. McCoy Jr., Roy H. Perlis, Finale Doshi-Velez, Elena L. Glassman
</li>
<li><a href="papers/examplenet_chi2021.pdf"><b>Visualizing Examples of Deep Neural Networks at Scale</b> [PDF]</a>
<!-- <br/> -->
<!-- <br/>[Video: <a href="https://www.youtube.com/watch?v=kU8cBy-0z7Q">YouTube</a> <a href="papers/examplenet_videofigure.mp4">MP4</a> <a href="papers/examplenet_videofigurecaptions.vtt">Captions</a>] -->
<!-- <br/>[Preview: <a href="https://www.youtube.com/watch?v=uUNuyliR020">YouTube</a> <a href="papers/examplenet_videopreview.mp4">MP4</a> <a href="papers/examplenet_videopreviewcaptions.vtt">Captions</a>] -->
<br/><b>CHI</b> 2021 <em-honorable>Honorable Mention Award</em-honorable>
<br/>Litao Yan, Elena L. Glassman, Tianyi Zhang
<br/><span><i>Preview (30 sec)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/uUNuyliR020" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span>
<br/><span><i>Conference Talk (5 min)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/kU8cBy-0z7Q" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span>
</li>
<li><a href="papers/evalGenModels_chi2021.pdf"><b>Evaluating the Interpretability of Generative Models by Interactive Reconstruction</b> [PDF]</a>
<!-- <br/> -->
<!-- <br/>[Video: <a href="https://www.youtube.com/watch?v=zbPeMT-ssXo">YouTube</a> <a href="papers/evalGen_videofigure.mp4">MP4</a>] <a href="papers/evalGen_supplementalmaterials.zip">[Supp (ZIP)]</a> -->
<br/><b>CHI</b> 2021 <em-honorable>Honorable Mention Award</em-honorable>
<br/>Andrew Slavin Ross, Nina Chen, Elisa Zhao Hang, Elena L. Glassman, Finale Doshi-Velez
<br/><span><i>Conference Talk (5 min)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/zbPeMT-ssXo" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span>
</li>
<li><a href="papers/bucinca_iui20_proxy.pdf"><b>Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating XAI Systems</b> [PDF]</a>
<!-- <br/> -->
<br/><b>IUI</b> 2020 <em-best>Best Paper Award</em-best>
<br/>Zana Bucinca*, Phoebe Lin*, Krzysztof Gajos, Elena L. Glassman <i>*indicates equal contributions</i>
</li>
</ul>
<p>
<span class="boldish">HCI + Program Synthesis</span>
</p>
<ul>
<li><a href="papers/ASSUAGE_UIST21.pdf"><b>ASSUAGE: Assembly Synthesis Using A Guided Exploration</b> [PDF]</a>
<br/><b>UIST</b> 2021
<br/>Jingmei Hu, Priyan Vaithilingam, Stephen Chong, Margo Seltzer, Elena L. Glassman
<br/><span><i>Conference Talk (5 min)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/tpMDSW_7yFc" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span>
<br/><span><i>Conference Talk (10 min)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/1UrRaQMOEM8" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span>
</li>
<li><a href="papers/PLandHCI_betterTogether.pdf"><b>PL and HCI: Better Together</b> [PDF]</a> <a href="https://cacm.acm.org/magazines/2021/8/254314-pl-and-hci/fulltext">[HTML]</a>
<br/><b>CACM</b> 2021
<br/>Sarah E. Chasins, Elena L. Glassman, Joshua Sunshine
<br/><span><i>Seminar Talk (55 min)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/Mf6xAupxPdg" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span>
</li>
<li><a href="papers/ips_chi2021.pdf"><b>Interpretable Program Synthesis</b> [PDF]</a>
<br/><b>CHI</b> 2021
<br/>Tianyi Zhang, Zhiyang Chen, Yuanli Zhu, Priyan Vaithilingam, Xinyu Wang, Elena L. Glassman
<br/><span><i>Preview (30 sec)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/u1QKCOyad-c" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span>
<br/><span><i>Conference Talk (5 min)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/v5scz1bzQN8" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span>
</li>
<li><a href="papers/ips_augex_uist20.pdf"><b>Interactive Program Synthesis by Augmented Examples</b> [PDF]</a>
<br/><b>UIST</b> 2020
<br/>Tianyi Zhang, London Lowmanstone, Xinyu Wang, Elena L. Glassman
<br/><span><i>Preview (30 sec)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/ShVlU679i9s" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span>
<br/><span><i>Conference Talk (5 min)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/vosQh_Duk-E" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span>
</li>
<li><a href="https://arxiv.org/abs/1907.06535"><b>Characterizing Developer Use of Automatically Generated Patches</b> [PDF]</a>
<br/><b>VL/HCC</b> 2019
<br/>José Pablo Cambronero, Jiasi Shen, Jürgen Cito, Elena L. Glassman, Martin Rinard
</li>
<li><a href="papers/glassmanLatS17.pdf"><b>Writing Reusable Code Feedback at Scale with Mixed-Initiative Program Synthesis</b> [PDF]</a>
<br/><b>Learning @ Scale</b> (L@S) 2017
<br/>Andrew Head*, Elena L. Glassman*, Gustavo Soares*, Ryo Suzuki, Lucas Figueredo, Loris D'Antoni and Björn Hartmann
<br/><i>*indicates equal contributions</i>
</li>
</ul>
<p>
<span class="boldish">HCI + Software Engineering</span>
</p>
<ul>
<li><a href="papers/Vaithilingam2023ICSE_IntelliCode.PDF"><b>Towards More Effective AI-Assisted Programming: A Systematic Design Exploration to Improve Visual Studio IntelliCode’s User Experience</b> [PDF]</a>
<br/><b>International Conference on Software Engineering (ICSE) Software Engineering in Practice (SEIP) Track</b> 2023
<br/>Priyan Vaithilingam, Elena L. Glassman, Peter Groenwegen, Sumit Gulwani, Austin Z Henley, Rohan Malpani, David Pugh, Arjun Radhakrishna, Gustavo Soares, Joey Wang, and Aaron Yim
</li>
<li><a href="papers/copilot_lbw_chi22.pdf"><b>Expectation vs. Experience: Evaluating the Usability of Code Generation Tools Powered by Large Language Models</b> [PDF]</a>
<br/>Late Breaking Work @ <b>CHI</b> 2022
<br/>Priyan Vaithilingam, Tianyi Zhang, Elena L. Glassman
</li>
<li><a href="papers/paralib_uist22.pdf"><b>Concept-Annotated Examples for Library Comparison</b> [PDF]</a>
<br/><b>UIST</b> 2022
<br/>Litao Yan, Miryung Kim, Björn Hartmann, Tianyi Zhang, Elena L. Glassman
</li>
<li><a href="papers/fse2020-industry-example-generation.pdf"><b>Exempla Gratis (E.G.): Code Examples for Free</b> [PDF]</a>
<br/><b>FSE Industry Track</b> 2020
<br/>Celeste Barnaby, Koushik Sen, Tianyi Zhang, Elena L. Glassman, Satish Chandra
</li>
<li><a href="papers/Data-driven-API-CHI20.pdf"><b>Enabling Data-Driven API Design with Community Usage Data: A Need-Finding Study</b> [PDF]</a>
<br/><b>CHI</b> 2020
<br/>Tianyi Zhang, Björn Hartmann, Miryung Kim, and Elena L. Glassman
<br/><span><i>Conference Talk (13 min)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/RTPKQ3fAq84" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span>
</li>
<li><a href="papers/examplore_chi18.pdf"><b>Visualizing API Usage Examples at Scale</b> [PDF]</a>
<br/><b>CHI</b> 2018
<br/>Elena L. Glassman*, Tianyi Zhang*, Björn Hartmann, Miryung Kim <i>*indicates equal contributions</i>
<br/><span><i>Preview (30 sec)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/Y_s3kyMNSd0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span>
</li>
<!-- (<a href="https://eglassman.github.io/examplore/">supplemental info</a>) -->
<!-- <li><em><a href="https://andrewhead.info/assets/pdf/scooping-examples.pdf">Interactive Extraction of Examples from Existing Code</a></em>.<br/>Andrew Head, Elena Glassman, Björn Hartmann, and Marti Hearst. CHI 2018.<br/>(<a href="https://andrewhead.info/assets/pdf/scooping-examples-auxiliary-material.pdf">supplemental material</a>) (<a href="https://andrewhead.info/assets/pdf/scooping-examples-slides.pdf">slides</a>) (<a href="https://codescoop.berkeley.edu">demo</a>) (<a href="https://www.youtube.com/watch?v=RYbhnRDbvyY">video</a>) (<a href="https://www.youtube.com/watch?v=sIpSS-F1Ltg">teaser</a>)</li> -->
<li><a href="papers/uist2015-elg-foobaz.pdf"><b>Foobaz: Variable Name Feedback for Student Code at Scale</b> [PDF]</a>
<br/><b>UIST</b> 2015
<br/>Elena L. Glassman, Lyla J Fischer, Jeremy Scott, Robert C. Miller
<br/><span><i>Preview (30 sec)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/4X94_2XEsrE" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span>
<br/><span><i>Conference Talk (16 min)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/-RAc7YXH1aI" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span>
</li>
<li><a href="papers/glassman-tochi.pdf"><b>OverCode: Visualizing Variation in Student Solutions to Programming Problems at Scale</b> [PDF]</a>
<br/><b>TOCHI</b> 2015
<br/>Elena L. Glassman, Jeremy Scott, Rishabh Singh, Philip Guo, Robert C. Miller
<br/><span><i>Preview (30 sec)</i><br/><iframe width="238" height="134" src="https://www.youtube.com/embed/6ov_82nxpbQ" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></span>
</li>
</ul>
<p>
<span class="boldish">HCI + Online Information</span>
</p>
<ul>
<li><a href="papers/reliability_criteria.pdf"><b>Reliability Criteria for News Websites</b> [PDF]</a>
<br/><b>TOCHI</b> 2024
<br/>Hendrik Heuer, Elena L. Glassman
</li>
<li><a href="papers/who_checklist_chi22.pdf"><b>A Comparative Evaluation of Interventions Against Misinformation: Augmenting the WHO Checklist</b> [PDF]</a>
<br/><b>CHI</b> 2022
<br/>Hendrik Heuer, Elena L. Glassman
</li>
</ul>
<p>
<span class="boldish">HCI + Reading</span>
</p>
<ul>
<li><a href="papers/accessible_text_tools_MuC23.pdf"><b>Accessible Text Tools for People with Cognitive Impairments and Non-Native Readers: Challenges and Opportunities</b> [PDF]</a>
<br/><b>Mensch und Computer</b> 2023
<br/>Hendrik Heuer, Elena L. Glassman
</li>
<li><a href="papers/heuer_CHI23LBW_accessibleTextTools.pdf"><b>Accessible Text Tools: Where They Are Needed &amp; What They Should Look Like</b> [PDF]</a>
<br/>Late-Breaking Work @ <b>CHI</b> 2023
<br/>Hendrik Heuer, Elena L. Glassman
</li>
<li><a href="papers/glassman_CJ_2020.pdf"><b>Triangulating the News: Visualizing Commonality and Variation Across Many News Stories on the Same Event</b> [PDF]</a>
<br/><b>Computation + Journalism Symposium</b> 2020
<br/>Elena L. Glassman, Janet Sung, Katherine Qian, Yuri Vishnevsky, Amy Zhang
</li>
</ul>
<p>
<span class="boldish">Additional Working Drafts</span>
</p>
<ul>
<li><a href="papers/alt_CHI_Benchmarks_are_not_enough_8p.pdf">DRAFT: Contextual Evaluation of AI: a New Gold Standard [PDF]</a>
<br/>Finale Doshi-Velez* and Elena L. Glassman*
</li>
<li><a href="https://arxiv.org/pdf/2309.02257.pdf">CHI'23 Human-AI Interaction course handout: Designing Interfaces for Human-Computer Communication: An On-Going Collection of Considerations [PDF]</a>
<br/>Elena L. Glassman
</li>
</ul>
</div>
<div class="header lead top-padding" id="glassman">
About Prof. Glassman
</div>
<p class="boldish">Bio</p>
<p>Elena L. Glassman is an Assistant Professor of Computer Science at the Harvard John A. Paulson School of Engineering &amp; Applied Sciences, specializing in human-computer interaction. From 2018 to 2022, she was the Stanley A. Marks &amp; William H. Marks Professor at the Radcliffe Institute for Advanced Study, and she was more recently named a 2023 Sloan Research Fellow. At MIT, she earned a PhD and MEng in Electrical Engineering and Computer Science and a BS in Electrical Science and Engineering, supported by the NSF Graduate Research Fellowship and the NDSEG Graduate Fellowship. Before joining Harvard, she was a postdoctoral scholar in Electrical Engineering and Computer Science at the University of California, Berkeley, where she received the Berkeley Institute for Data Science Moore/Sloan Data Science Fellowship.</p>
<p class="boldish"><a href="Academic_CV.pdf">Curriculum Vitae</a></p>
<p class="boldish">Administrative Support</p>
<p>Mark Peterson (mpeterson@seas.harvard.edu)</p>
<p class="boldish">Archive of Faculty Application Materials</p>
<ul>
<li><a href="facultymaterials/Glassman_Research_website.pdf">Research Statement</a></li>
<li><a href="facultymaterials/Glassman_Teaching_website.pdf">Teaching Statement</a></li>
<li><a href="facultymaterials/Glassman_Diversity_website.pdf">Diversity Statement</a></li>
</ul>
</td>
<td width="5px">
</td>
<td class="small" width="250px">
<div class="box">
<div class="header lead">
Current Lab Members
</div>
<div class="margin-bottom">
<div class="slightlybold">
<a href="#glassman">Prof. Elena L. Glassman</a>
</div>
<div>
<img src="img/Glassman_200x300.jpg" alt="Elena Glassman's professional headshot" style="width:250px;">
<p>Assistant Professor of Computer Science, SEAS, specializing in human-computer interaction</p>
<!-- (2019-2022) Stanley A. Marks and William H. Marks Assistant Professor at the <b>Radcliffe Institute</b><br> -->
</div>
</div>
<div class="margin-bottom">
<div class="slightlybold">
<a href="https://www.katygero.com/">Katy Ilonka Gero, PhD</a>
</div>
<div>
Postdoctoral Scholar, who is on the faculty job market this year!
</div>
</div>
<div class="margin-bottom">
<div class="slightlybold">
<a href="https://priyan.info/">Priyan Vaithilingam</a>
</div>
<div>
PhD student, Computer Science
</div>
</div>
<div class="margin-bottom">
<div class="slightlybold">
Tyler Holloway
</div>
<div>
PhD student, Computer Science
<br>co-advised by Nada Amin (PL)
</div>
</div>
<div class="margin-bottom">
<div class="slightlybold">
<a href="https://www.ziweigu.com/">Ziwei Gu</a>
</div>
<div>
PhD student, Computer Science
</div>
</div>
<div class="margin-bottom">
<div class="slightlybold">
Chelse Swoopes
</div>
<div>
PhD student, Computer Science
</div>
</div>
<b>Recent Alumni</b>
<div class="margin-bottom">
<div class="slightlybold">
<a href="http://ianarawjo.com/">Prof. Ian Arawjo</a>
</div>
<div>
Postdoctoral Scholar, now Assistant Professor of Human-Computer Interaction at the University of Montréal in the Department of Computer Science and Operations Research (DIRO)
</div>
</div>
</div>
<div class="box">
<div class="header lead">
Related Courses
</div>
<div class="margin-bottom">
<div class="slightlybold">
<b>(New!)</b> CS 178: Engineering Usable Interactive Systems
</div>
<ul>
<li><i>Public course website forthcoming</i></li>
<!-- <li><a href="/cs179.html">Course website</a></li> -->
<!-- <li>Recorded interviews [<a href="https://open.spotify.com/show/6TJYwzJUYomKBSEhS58uUw">Spotify</a>] [<a href="https://podcasts.apple.com/us/podcast/cs-179-design-of-useful-and-usable-interactive-systems/id1504018804">iTunes</a>] [<a href="http://cs179.libsyn.com">Libsyn</a>] [<a href="https://cs179.libsyn.com/rss">RSS</a>]</li> -->
</ul>
</div>
<div>
<div class="slightlybold">
CS 279R
</div>
<div>
<ul>
<li><a href="http://pl-hci-seminar.seas.harvard.edu">PL/HCI Graduate Seminar</a> (2019)</li>
<li><a href="https://docs.google.com/document/d/1tR7G1ghLYpcqFj3v_E3CEBTAINIJuMTOumR1CCfykoo/edit?usp=sharing">Human-AI Interaction Graduate Seminar</a> (2020)</li>
<li><a href="https://www.dropbox.com/scl/fi/qs5wb1mithsgptf6yx7ox/Day-0-Syllabus-Design-Arguments-and-an-Activity-Dissecting-our-first-paper-together.paper?dl=0&rlkey=over9s4yngln3vva3ycmvdfad">"How to Write a UIST Paper" Graduate Seminar</a> (2022)</li>
</ul>
</div>
</div>
<div class="margin-bottom">
<div class="slightlybold">
CS 179: Design of Useful and Usable Interactive Systems
</div>
<ul>
<li><a href="/cs179.html">Course website</a></li>
<li>Recorded interviews [<a href="https://open.spotify.com/show/6TJYwzJUYomKBSEhS58uUw">Spotify</a>] [<a href="https://podcasts.apple.com/us/podcast/cs-179-design-of-useful-and-usable-interactive-systems/id1504018804">iTunes</a>] [<a href="http://cs179.libsyn.com">Libsyn</a>] [<a href="https://cs179.libsyn.com/rss">RSS</a>]</li>
</ul>
</div>
</div>
<div class="box">
<div class="header lead">
Blog Posts &amp; Short Video Commentaries
</div>
<div class="margin-bottom">
<div class="slightlybold">
<a href="/blog/chi23.html">CHI'23 Trip Report</a> [Blog]
</div>
</div>
<div class="margin-bottom">
<div class="slightlybold">
<a href="https://youtu.be/CWosJGKiZDo">Information Scent and Interface Design</a> [YouTube, 2 min]
</div>
</div>
</div>
</div>
<p>When we're actively recruiting, instructions will be linked <a href="/glassman.html">here</a>.</p>
</td>
</tr>
</table>