<!DOCTYPE html>
<html lang="en">
<head>
<title>Human-Machine Interaction @ IIIT-D</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<link href="https://fonts.googleapis.com/css?family=Quicksand:300,400,500,700,900" rel="stylesheet">
<link rel="stylesheet" href="fonts/icomoon/style.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
<link rel="stylesheet" href="css/bootstrap.min.css">
<link rel="stylesheet" href="css/jquery-ui.css">
<link rel="stylesheet" href="css/owl.carousel.min.css">
<link rel="stylesheet" href="css/owl.theme.default.min.css">
<link rel="stylesheet" href="css/owl.theme.default.min.css">
<link rel="stylesheet" href="css/jquery.fancybox.min.css">
<link rel="stylesheet" href="css/bootstrap-datepicker.css">
<link rel="stylesheet" href="fonts/flaticon/font/flaticon.css">
<link rel="stylesheet" href="css/aos.css">
<link rel="stylesheet" href="css/style.css">
</head>
<body data-spy="scroll" data-target=".site-navbar-target" data-offset="300">
<style>
.wrapper {
text-align: center;
}
.new_logo_des{
display: inline-block;
}
</style>
<div class="site-wrap">
<div class="site-mobile-menu site-navbar-target">
<div class="site-mobile-menu-header">
<div class="site-mobile-menu-close mt-3">
<span class="icon-close2 js-menu-toggle"></span>
</div>
</div>
<div class="site-mobile-menu-body"></div>
</div>
<div class="border-bottom top-bar py-2 bg-dark" id="home-section">
<div class="container">
<div class="row">
<div class="col-md-6">
<p class="mb-0">
<span class="mr-3"><strong class="text-white">Phone:</strong> <a href="tel://#">+91-11-26907523</a></span>
<span><strong class="text-white">Email:</strong> <a href="#">hmi(at)iiitd(dot)ac(dot)in</a></span>
</p>
</div>
<div class="col-md-6">
<ul class="social-media">
<li><a href="https://www.facebook.com/hmi.iiitd/" class="p-2" target="_blank"><span class="icon-facebook"></span></a></li>
<li><a href="https://twitter.com/hmi_iiitd" class="p-2" target="_blank"><span class="icon-twitter"></span></a></li>
<li><a href="https://www.instagram.com/hmi_iiitd/" class="p-2" target="_blank"><span class="icon-instagram"></span></a></li>
<li><a href="https://www.linkedin.com/company/human-machine-interaction/about/" class="p-2" target="_blank"><span class="icon-linkedin"></span></a></li>
<!-- <li><a href="#" class="p-2"><i class="fa fa-globe" aria-hidden="true"></i></a></li> -->
</ul>
</div>
</div>
</div>
</div>
<header class="site-navbar py-4 bg-white js-sticky-header site-navbar-target" role="banner">
<div class="container">
<div class="row align-items-center">
<div class="col-11 col-xl-2">
<!-- <h1 class="mb-0 site-logo"><a href="index.html" class="text-black h2 mb-0">Create<span class="text-primary">.</span> </a></h1>
--> <a class="mb-0 site-logo" href="index.html"><img class="new_logo_des img-fluid" src="images/logo/logo_hit.jpg" id="header-logo"></a>
</div>
<div class="col-12 col-md-10 d-none d-xl-block">
<nav class="site-navigation position-relative text-right" role="navigation">
<ul class="site-menu main-menu js-clone-nav mr-auto d-none d-lg-block">
<li><a href="index.html" class="nav-link">Home</a></li>
<li class="has-children">
<a href="project.html" class="nav-link">Research</a>
<ul class="dropdown">
<li><a href="project.html#projects-section">Projects</a></li>
<li><a class="nav-link" href="publication.html">Publications</a></li>
</ul>
</li>
<li class="has-children">
<a href="team.html" class="nav-link">Our Team</a>
<ul class="dropdown">
<li><a href="team.html#director-section">Director</a></li>
<li><a href="team.html#members-section">Members</a></li>
<li><a href="team.html#alumni-section">Alumni</a></li>
<li><a href="team.html#collaborator-section">Collaborators</a></li>
</ul>
</li>
<li class="has-children">
<a href="resources.html" class="nav-link">Resources</a>
<ul class="dropdown">
<li><a href="faq.html" class="nav-link">FAQ</a></li>
<li><a href="blog.html" class="nav-link">Blogs</a></li>
<li><a href="resources.html#internat-res-section">Internal Resources</a></li>
<li><a href="resources.html#external-res-section">External Resources</a></li>
</ul>
</li>
<li><a href="updates.html" class="nav-link">Updates</a></li>
<!-- <li><a href="#-section" class="nav-link">Contact Us</a></li> -->
<li class="has-children">
<a href="contact_us.html" class="nav-link">Let's Connect</a>
<ul class="dropdown">
<li><a href="join_us.html">Join Us</a></li>
<li><a href="contact_us.html">Contact Us</a></li>
</ul>
</li>
</ul>
</nav>
</div>
<div class="d-inline-block d-xl-none ml-md-0 mr-auto py-3" style="position: relative; top: 3px;"><a href="#" class="site-menu-toggle js-menu-toggle text-black"><span class="icon-menu h3"></span></a></div>
</div>
</div>
</header>
<div class="site-blocks-cover overlay" style="background-image: url(images/header/project_3.jpg);" data-aos="fade" data-stellar-background-ratio="0.5">
<!-- Source of Image : https://upload.wikimedia.org/wikipedia/commons/d/d7/Research_Scene_Vector.svg -->
<div class="container">
<div class="row align-items-center justify-content-center text-center">
<div class="col-md-12" data-aos="fade-up" data-aos-delay="400">
<div class="row justify-content-center mb-4">
<div class="col-md-12 text-center">
<h1><strong>Projects</strong></h1>
<!-- <p class="lead mb-5">Free Web Template by <a href="#" target="_blank">Colorlib</a></p> -->
<!-- <div><a data-fancybox data-ratio="2" href="https://vimeo.com/317571768" class="btn btn-primary btn-md">Watch Video</a></div> -->
</div>
</div>
</div>
</div>
</div>
</div>
<section class="site-section" id="projects-section">
<div class="container">
<div class="row mb-5 justify-content-center">
<div class="col-md-8 text-center">
<h2 class="text-black h1 site-section-heading text-center">Projects</h2>
<p class="lead"></p>
</div>
</div>
<div class="row mb-5 justify-content-center">
<div id="myBtnContainer">
<button class="btn_pr active" onclick="filterSelection('all')">Show all</button>
<button class="btn_pr" onclick="filterSelection('UbiquitousComputing')">Ubiquitous Computing</button>
<button class="btn_pr" onclick="filterSelection('hri')">HRI</button>
<button class="btn_pr" onclick="filterSelection('affcom')">Affective Computing</button>
</div>
</div>
</div>
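<!--
The category buttons above call filterSelection(), which is expected to be defined in the site's
scripts rather than in this file. A minimal sketch of how such a function could work is given below;
it assumes the .filterDiv cards used in the grid that follows and a .show class toggled via CSS
(these are assumptions based on the markup in this page, not the actual site script).

<script>
function filterSelection(category) {
  // Show every card for "all"; otherwise show only the cards tagged with the chosen category.
  var cards = document.getElementsByClassName("filterDiv");
  for (var i = 0; i < cards.length; i++) {
    if (category === "all" || cards[i].classList.contains(category)) {
      cards[i].classList.add("show");
    } else {
      cards[i].classList.remove("show");
    }
  }
  // Move the "active" highlight to the clicked button.
  var buttons = document.getElementsByClassName("btn_pr");
  for (var j = 0; j < buttons.length; j++) {
    buttons[j].classList.remove("active");
  }
  if (window.event && window.event.target) {
    window.event.target.classList.add("active");
  }
}
</script>
-->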
<div class="container-fluid">
<div class="row">
<!-- <div class="filterDiv A"> -->
<div class="col-md-6 col-lg-4 filterDiv UbiquitousComputing">
<!-- <a class="media-1" data-fancybox data-src="#Driver-VR-content" href="javascript:;"> -->
<a class="media-1" href="himanshu_proj.html" target="_blank">
<img src="images/project_thumb_2/p8.png" alt="Image" class="img-fluid" height="683.76" width="683.76" style="padding: 2px; border: 2px solid #000000;">
<div class="media-1-content">
<!-- <h2>Analyzing the Cognitive Driving Behavior of Indian Drivers in Virtual Environment</h2> -->
<!-- <span class="category">Virtual Reality, Affective Computing</span> -->
</div>
</a>
</div>
<!-- </div> -->
<!-- <div class="filterDiv A"> -->
<div class="col-md-6 col-lg-4 filterDiv affcom">
<!-- <a class="media-1" data-fancybox data-src="#emotion-in-movie-content" href="javascript:;"> -->
<a class="media-1" href="aarushi_proj.html" target="_blank">
<img src="images/project_thumb_2/p9.png" alt="Image" class="img-fluid" height="683.76" width="683.76" style="padding: 2px; border: 2px solid #000000;">
<div class="media-1-content">
<!-- <h2>Recognizing Induced Emotions of Movie Audiences: Are Induced and Perceived Emotions the Same?</h2> -->
<!-- <span class="category">Affective Computing</span> -->
</div>
</a>
</div>
<!-- </div> -->
<!-- <div class="filterDiv A"> -->
<div class="col-md-6 col-lg-4 filterDiv hri">
<!-- <a class="media-1" data-fancybox data-src="#social-robot-primary-edu-content" href="div_proj_template.html"> -->
<a class="media-1" href="div_proj_template.html" target="_blank">
<img src="images/project_thumb_2/p10.png" alt="Image" class="img-fluid" height="683.76" width="683.76" style="padding: 2px; border: 2px solid #000000;">
<div class="media-1-content">
<!-- <h2>Social Robot for Primary Education</h2> -->
<!-- <span class="category">Accepted at: HRI'20</span> -->
</div>
</a>
</div>
<!-- </div> -->
<!-- <div class="filterDiv A"> -->
<div class="col-md-6 col-lg-4 filterDiv affcom">
<!-- <a class="media-1" data-fancybox data-src="#Vyaktitv-content" href="javascript:;"> -->
<a class="media-1" href="vyaktitv_shahid.html" target="_blank">
<img src="images/project_thumb_2/p2.png" alt="Image" class="img-fluid" height="683.76" width="683.76" style="padding: 2px; border: 2px solid #000000;">
<div class="media-1-content">
<!-- <h2>Vyaktitv: Multimodal Personality Assesment Peer-to-Peer Hindi Conversation</h2> -->
<!-- <span class="category">Speech Processing,Affective Computing</span> -->
</div>
</a>
</div>
<!-- </div> -->
<!-- <div class="filterDiv A"> -->
<div class="col-md-6 col-lg-4 filterDiv Ubiquit">
<!-- <a class="media-1" data-fancybox data-src="#adhd-proj-content" href="javascript:;"> -->
<a class="media-1" href="adhd_proj.html" target="_blank">
<img src="images/project_thumb_2/p1.png" alt="Image" class="img-fluid" height="683.76" width="683.76" style="padding: 2px; border: 3px solid #000000;">
<div class="media-1-content">
<!-- <h2>Engagement Analysis on Children with ADHD</h2> -->
<!-- <span class="category">Affective Computing</span> -->
</div>
</a>
</div>
<!-- </div> -->
<!-- <div class="filterDiv B"> -->
<div class="col-md-6 col-lg-4 filterDiv hri">
<!-- <a class="media-1" data-fancybox data-src="#hri-asd-content" href="javascript:;"> -->
<a class="media-1" href="hri_asd.html" target="_blank">
<img src="images/project_thumb_2/p11.png" alt="Image" class="img-fluid" height="683.76" width="683.76" style="padding: 2px; border: 2px solid #000000;">
<div class="media-1-content">
<!-- <h2>HRI for ASD Diagnosis</h2> -->
<!-- <span class="category">Affective Computing, Human-Robot Interaction</span> -->
</div>
</a>
</div>
<div class="col-md-6 col-lg-4 filterDiv hri">
<!-- <a class="media-1" data-fancybox data-src="#hri-content" href="javascript:;"> -->
<a class="media-1" href="hri_content.html" target="_blank">
<img src="images/project_thumb_2/p12.jpg" alt="Image" class="img-fluid" height="683.76" width="683.76" style="padding: 2px; border: 2px solid #000000;">
<div class="media-1-content">
<!-- <h2>HRI in public settings in India</h2> -->
<!-- <span class="category">Human-Robot Interaction</span> -->
</div>
</a>
</div>
<div class="col-md-6 col-lg-4 filterDiv UbiquitousComputing">
<!-- <a class="media-1" data-fancybox data-src="#twitter-content" href="javascript:;"> -->
<a class="media-1" href="twitter_content.html" target="_blank">
<img src="images/project_thumb_2/p3.png" alt="Image" class="img-fluid" height="683.76" width="683.76" style="padding: 2px; border: 2px solid #000000;">
<div class="media-1-content">
<!-- <h2>An Investigation of Fortune 100 Companies: Insights to improve Twitter Engagement</h2> -->
<!-- <span class="category">Ubiquitous Computing</span> -->
</div>
</a>
</div>
<div class="col-md-6 col-lg-4 filterDiv affcom">
<!-- <a class="media-1" data-fancybox data-src="#depfuse-content" href="javascript:;"> -->
<a class="media-1" href="depfuse_content.html" target="_blank">
<img src="images/project_thumb_2/p4.png" alt="Image" class="img-fluid" height="683.76" width="683.76" style="padding: 2px; border: 2px solid #000000;">
<div class="media-1-content">
<!-- <h2>DepFuseNet: Depression Recognition from Audio-Visual Features Using Multi-Modal Fusion</h2> -->
<!-- <span class="category">Multimodal Computing</span> -->
</div>
</a>
</div>
<div class="col-md-6 col-lg-4 filterDiv affcom">
<!-- <a class="media-1" data-fancybox data-src="#mihir-content" href="javascript:;"> -->
<a class="media-1" href="mihir_content.html" target="_blank">
<img src="images/project_thumb_2/p5.png" alt="Image" class="img-fluid" height="683.76" width="683.76" style="padding: 2px; border: 2px solid #000000;">
<div class="media-1-content">
<!-- <h2>Feature Selection for Clickbait Detection</h2> -->
<!-- <span class="category">Deep Learning</span> -->
</div>
</a>
</div>
<div class="col-md-6 col-lg-4 filterDiv hri">
<!-- <a class="media-1" data-fancybox data-src="#anmol-mihir-content" href="javascript:;"> -->
<a class="media-1" href="anmol_mihir_content.html" target="_blank">
<img src="images/project_thumb_2/p6.jpg" alt="Image" class="img-fluid" height="683.76" width="683.76" style="padding: 2px; border: 2px solid #000000;">
<div class="media-1-content">
<!-- <h2>Smart Human Robot Interaction</h2> -->
<!-- <span class="category">Human Robot Interaction</span> -->
</div>
</a>
</div>
<div class="col-md-6 col-lg-4 filterDiv UbiquitousComputing">
<!-- <a class="media-1" data-fancybox data-src="#anmol-eye-content" href="javascript:;"> -->
<a class="media-1" href="anmol_eye.html" target="_blank">
<img src="images/project_thumb_2/p7.png" alt="Image" class="img-fluid" height="683.76" width="683.76" style="padding: 2px; border: 2px solid #000000;">
<div class="media-1-content">
<!-- <h2>Engagement Evaluation in MOOC Environments using Gaze Estimation</h2> -->
<!-- <span class="category">Affective Computing</span> -->
</div>
</a>
</div>
<div class="col-md-6 col-lg-4 filterDiv affcom">
<!-- <a class="media-1" data-fancybox data-src="#shubhangi-ashwini-content" href="javascript:;"> -->
<a class="media-1" href="shubhangi_proj.html" target="_blank">
<img src="images/project_thumb_2/p14.png" alt="Image" class="img-fluid" height="683.76" width="683.76" style="padding: 2px; border: 2px solid #000000;">
<div class="media-1-content">
<!-- <h2>Facial Actions for Artificial Agents</h2> -->
<!-- <span class="category">Human Robot Interaction</span> -->
</div>
</a>
</div>
<div class="col-md-6 col-lg-4 filterDiv UbiquitousComputing">
<!-- <a class="media-1" data-fancybox data-src="#chirag-content" href="javascript:;"> -->
<a class="media-1" href="chirag_content.html" target="_blank">
<img src="images/project_thumb_2/p15.png" alt="Image" class="img-fluid" height="683.76" width="683.76" style="padding: 2px; border: 2px solid #000000;">
<div class="media-1-content">
<!-- <h2>Parrot: Picture-Based App for Verbal Communication</h2> -->
<!-- <span class="category">Ubiquitous Computing</span> -->
</div>
</a>
</div>
<div class="col-md-6 col-lg-4 filterDiv affcom">
<!-- <a class="media-1" data-fancybox data-src="#devashi-content" href="javascript:;"> -->
<a class="media-1" href="devashi_content.html" target="_blank" >
<img src="images/project_thumb_2/p16.png" alt="Image" class="img-fluid" height="683.76" width="683.76" style="padding: 2px; border: 2px solid #000000;">
<div class="media-1-content">
<!-- <h2>Feature Extraction and Feature Selection for Emotion Recognition using Facial Expression</h2> -->
<!-- <span class="category">Affective Computing</span> -->
</div>
</a>
</div>
<div class="col-md-6 col-lg-4 filterDiv affcom">
<!-- <a class="media-1" data-fancybox data-src="#devashi-content" href="javascript:;"> -->
<a class="media-1" href="anxiety_in_adults.html" target="_blank" >
<img src="images/project_thumb_2/p13.png" alt="Image" class="img-fluid" height="683.76" width="683.76" style="padding: 2px; border: 2px solid #000000;">
<div class="media-1-content">
<!-- <h2>Predicting Anxiety in Adults Using Physiological Signals</h2> -->
<!-- <span class="category">Affective Computing</span> -->
</div>
</a>
</div>
<!-- </div> -->
<div style="display: none;" id="hri-asd-content">
<div class="container">
<div class="row justify-content-center mb-5">
<div class="text-center pb-1">
<h3 class="text-black h1 site-section-heading">HRI for early ASD Diagnosis in Children</h3>
<img src="images/resized_image/shahid_proj.jpg" alt="Image" class="img-fluid rounded" >
</div>
</div>
<div class="row mb-5">
<div class="col-md-12 order-md-1" data-aos="fade">
<div class="col-17 mb-4">
<p class="text-primary lead" > <strong>Objective</strong></p>
<p class="lead" style="text-align:justify;">Autism is a developmental disorder under the broad spectrum of pervasive developmental disorders characterized by impairment in social interaction, communication skills, and repetitive and restricted behavior of the individual. The children diagnosed with ASD specifically benefit from early interventions, ideally between the ages one to five, as these interventions are designed to take advantage of the learning potential that the brain of a young child possesses. Hence early diagnosis of ASD is crucial in the development of these children. There is no biological markers for ASD and the diagnosis of ASD relies heavily on the behavioural observations made by the expert clinicians combined with assessments based on parent responses on the behavioural history of the child since. The intricacies of the diagnostic procedure often led to misjudgment of the condition. Hence, robot-assisted diagnosis systems can be employed to improve the early detection of ASD in an automated assessment manner. We aim to create a model that can track child behaviour, quantify their state and assist the therapist in the diagnosis of the child. Multi-modal data collection and analysis will be performed for the child behaviour assessment and diagnosis. Our aim is to replace the labour intensive diagnostic procedure with a more objective and effective one with the help of social robots.</p>
<br>
<p class="text-primary lead" > <strong>Team members</strong></p>
<p class="lead" style="text-align:justify;">
Ashwini B<br>
Ananya Bhatia
</p>
<br>
<p class="text-primary lead" > <strong>Team member contributions</strong></p>
<p class="lead" style="text-align:justify;">
<h6 class="text-primary lead">Ashwini B</h6>
<ul>
<li>Task design for Diagnosis</li>
<li>Data Collection for multimodal analysis</li>
<li>Facial Emotion Recognition</li>
</ul>
<h6 class="text-primary lead">Ananya Bhatia</h6>
<ul>
<li>Gaze Detection and Model development for diagnostic task administration</li>
</ul>
</p>
<br>
<!-- <p class="text-primary lead" > <strong>Publication</strong></p>
<p class="lead" style="text-align:justify;">Kumar Singh, D., Sharma, S., Shukla, J., & Eden, G. (2020, March).<a href="https://dl.acm.org/doi/abs/10.1145/3371382.3378315"><strong> <i> Toy, Tutor, Peer, or Pet? Preliminary Findings from Child-Robot Interactions in a Community School.</i></strong></a> In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 325-327).<a href="https://www.youtube.com/watch?v=hMyGznAFYMk"><i><strong>[Video]</strong></i></a></p>
-->
</div>
</div>
</div>
</div>
</div>
<div style="display: none;" id="hri-content">
<div class="container">
<div class="row justify-content-center mb-5">
<div class="text-center pb-1">
<h3 class="text-black h1 site-section-heading">HRI in Public Settings</h3>
<img src="images/hri_proj.png" alt="Image" class="img-fluid rounded" >
</div>
</div>
<div class="row mb-5">
<div class="col-md-12 order-md-1" data-aos="fade">
<div class="col-17 mb-4">
<p class="text-primary lead" > <strong>Objective</strong></p>
<p class="lead" style="text-align:justify;">There have been several studies in the field of Human-Robot Interaction, specifically regarding introduction of social humanoid robots in public places for various applications and purposes. These robots are used in research, education, and healthcare all over the world. In order to serve these different purposes, robot require advanced dialog and interaction capabilities to help its users. This project aims to study the social dimension of human interaction with social robots. To the best of my knowledge, previous studies have not taken the Indian population into context. So, the social dimension will be studied and explored within the Indian population. Through the results obtained I also aim to make use of this study to develop better HRI for a wide range of applications. I hope this research can serve as the ground truth, can be used for reference and as pointers for guidelines to design interactions which much more closely mimic human expectations.
</p>
<br>
<!-- <p class="text-primary lead" > <strong>Publication</strong></p>
<p class="lead" style="text-align:justify;">Kumar Singh, D., Sharma, S., Shukla, J., & Eden, G. (2020, March).<a href="https://dl.acm.org/doi/abs/10.1145/3371382.3378315"><strong> <i> Toy, Tutor, Peer, or Pet? Preliminary Findings from Child-Robot Interactions in a Community School.</i></strong></a> In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 325-327).<a href="https://www.youtube.com/watch?v=hMyGznAFYMk"><i><strong>[Video]</strong></i></a></p>
-->
</div>
</div>
</div>
</div>
</div>
<div style="display: none;" id="twitter-content">
<div class="container">
<div class="row justify-content-center mb-5">
<div class="text-center pb-1">
<h3 class="text-black h1 site-section-heading">Best Practices for Better Twitter Engagement: What small and medium-sized enterprises can learn from Top Fortune 100 companies</h3>
<img src="images/tanya_kapur_proj.png" alt="Image" class="img-fluid rounded" >
</div>
</div>
<div class="row mb-5">
<div class="col-md-12 order-md-1" data-aos="fade">
<div class="col-17 mb-4">
<p class="text-primary lead" > <strong>Objective</strong></p>
<p class="lead" style="text-align:justify;">There have been several studies in the field of Human-Robot Interaction, specifically regarding introduction of social humanoid robots in public places for various applications and purposes. These robots are used in research, education, and healthcare all over the world. In order to serve these different purposes, robot require advanced dialog and interaction capabilities to help its users. This project aims to study the social dimension of human interaction with social robots. To the best of my knowledge, previous studies have not taken the Indian population into context. So, the social dimension will be studied and explored within the Indian population. Through the results obtained I also aim to make use of this study to develop better HRI for a wide range of applications. I hope this research can serve as the ground truth, can be used for reference and as pointers for guidelines to design interactions which much more closely mimic human expectations.
</p>
<br>
<p class="text-primary lead" > <strong>Team members</strong></p>
<p class="lead" style="text-align:justify;">
Tanya Kapur<br>
Anvit Sachadev
</p>
<br>
<p class="text-primary lead" > <strong>Team member contributions</strong></p>
<p class="lead" style="text-align:justify;">
<h6 class="text-primary lead">Tanya Kapur</h6>
<ul>
<li>Scripts for data analysis</li>
<li>Analysis of the data</li>
<li>Coming up with useful insights and recommendations</li>
<li>Overall Project Management</li>
<li>Research Paper Writing</li>
</ul>
<h6 class="text-primary lead">Tanya Kapur</h6>
<ul>
<li>Data Collection scripts</li>
<li>Collecting Data</li>
<li>Data pre-processing</li>
<!-- <li>Overall Project Management</li> -->
<!-- <li>Research Paper Writing</li> -->
</ul>
</p>
<br>
<!-- <p class="text-primary lead" > <strong>Publication</strong></p>
<p class="lead" style="text-align:justify;">Kumar Singh, D., Sharma, S., Shukla, J., & Eden, G. (2020, March).<a href="https://dl.acm.org/doi/abs/10.1145/3371382.3378315"><strong> <i> Toy, Tutor, Peer, or Pet? Preliminary Findings from Child-Robot Interactions in a Community School.</i></strong></a> In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 325-327).<a href="https://www.youtube.com/watch?v=hMyGznAFYMk"><i><strong>[Video]</strong></i></a></p>
-->
</div>
</div>
</div>
</div>
</div>
<div style="display: none;" id="Vyaktitv-content">
<div class="container">
<div class="row justify-content-center mb-5">
<div class="text-center pb-1">
<h3 class="text-black h1 site-section-heading">Vyaktitv: Multimodal Personality Assessment in Peer-to-Peer Hindi Conversations</h3>
<img src="images/resized_image/shahid_proj.jpg" alt="Image" class="img-fluid rounded" >
</div>
</div>
<div class="row mb-5">
<div class="col-md-12 order-md-1" data-aos="fade">
<div class="col-17 mb-4">
<p class="text-primary lead" > <strong>Objective</strong></p>
<p class="lead" style="text-align:justify;">Automatically detecting and analyzing personality traits can aid in several other applications in domains like mental health recognition and human resource management. The primary limitation of a wide range of techniques used for personality prediction so far is that they analyze these traits for each individual in isolation. In contrast, personality is intimately linked with our social behavior. In this work,we conduct an experiment where the subjects participated in peer-to-peer conversations in Hindi. To the best of our knowledge, no work has been done on analyzing personality in such a setting.Our contributions include the first peer-to-peer Hindi conversation-based dataset for personality prediction, Vyaktitv, which consists of high-quality audio and video recordings of the participants, along with Hinglish-based textual transcriptions for each conversation. The dataset also contains a rich set of socio-demographic features, including gender, age, income, and several others, for all the participants. We release the dataset for public use, along with a preliminary multimodal analysis of the Big Five personality traits based on audio, video, and linguistic features.</p>
<br>
<p class="text-primary lead" > <strong>Student Collaborator</strong></p> <p class="lead" style="text-align: justify;"> Shahid Nawaz Khan (IIIT Delhi)</p>
<p class="text-primary lead" > <strong>Faculty Collaborator</strong></p> <p class="lead" style="text-align: justify;"> Dr. Rajiv Ratn Shah (IIIT Delhi)</p>
<!-- <p class="text-primary lead" > <strong>Publication</strong></p>
<p class="lead" style="text-align:justify;">Kumar Singh, D., Sharma, S., Shukla, J., & Eden, G. (2020, March).<a href="https://dl.acm.org/doi/abs/10.1145/3371382.3378315"><strong> <i> Toy, Tutor, Peer, or Pet? Preliminary Findings from Child-Robot Interactions in a Community School.</i></strong></a> In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 325-327).<a href="https://www.youtube.com/watch?v=hMyGznAFYMk"><i><strong>[Video]</strong></i></a></p>
-->
</div>
</div>
</div>
</div>
</div>
<div style="display: none;" id="social-robot-primary-edu-content">
<div class="container">
<div class="row justify-content-center mb-5">
<div class="text-center pb-1">
<h3 class="text-black h1 site-section-heading">Social Robot for Primary Education</h3>
<img src="images/div_proj_1.png" alt="Image" class="img-fluid rounded" >
</div>
</div>
<div class="row mb-5">
<div class="col-md-12 order-md-1" data-aos="fade">
<div class="col-17 mb-4">
<p class="text-primary lead" > <strong>Abstract</strong></p>
<p class="lead" style="text-align:justify;">Research focused upon Child-Robot Interaction shows that robots in the classroom can support diverse learning goals amongst pre-school children. However, studies with children and robots in the Global South are currently limited. To address this gap, we conducted a study with children aged 4-8 years at a community school in New Delhi, India, to understand their interaction and experiences with a social robot. The children were asked to teach the English alphabet to a Cozmo robot using flash cards. Preliminary findings suggest that the children orient to the robot in a variety of ways including as a toy or pet. These orientations need to be explored further within the context of the Global South.</p>
<br>
<p class="text-primary lead" > <strong>Publication</strong></p>
<p class="lead" style="text-align:justify;">Kumar Singh, D., Sharma, S., Shukla, J., & Eden, G. (2020, March).<a href="https://dl.acm.org/doi/abs/10.1145/3371382.3378315"><strong> <i> Toy, Tutor, Peer, or Pet? Preliminary Findings from Child-Robot Interactions in a Community School.</i></strong></a> In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 325-327).<a href="https://www.youtube.com/watch?v=hMyGznAFYMk"><i><strong>[Video]</strong></i></a></p>
</div>
</div>
</div>
</div>
</div>
<div style="display: none;" id="Driver-VR-content">
<div class="container">
<div class="row justify-content-center mb-5">
<div class="text-center pb-1">
<h3 class="text-black h1 site-section-heading">Analyzing the Cognitive Driving Behavior of Indian Drivers in Virtual Environment</h3>
<img src="images/himanshu_proj_2.png" alt="Image" class="img-fluid rounded" >
</div>
</div>
<div class="row mb-5">
<div class="col-md-12 order-md-1" data-aos="fade">
<div class="col-17 mb-4">
<p class="text-primary lead" > <strong>Abstract</strong></p>
<p class="lead" style="text-align:justify;">A driver has to be very attentive while driving. He/she has to always stay calm. But during driving there can be various stressful instances which can affect the cognitive behavior of the driver and can lead to fatal accidents. In the coming age of autonomous driving the vehicles can assist the driver during these stressful conditions which can improve the driving experience. For this we need to find these stressful scenarios and analyse the cognitive behavior of the driver in these scenarios. Cognitive behavior of an individual can be analysed with the help of EEG or skin conductance signals. We analyse various parameters of EEG and skin conductance signal to states like stress, attention, drowsiness etc.</p>
<br>
<p class="text-primary lead" > <strong>Team members</strong></p>
<p class="lead" style="text-align:justify;">
Himanshu Bansal<br>
Pramil Panjawani<br>
Satvika Anand<br>
</p>
<br>
<p class="text-primary lead" > <strong>Team member contributions</strong></p>
<p class="lead" style="text-align:justify;">
<h6 class="text-primary lead">Himanshu Bansal</h6>
<ul>
<li>VR development</li>
<li>User Study</li>
<li>Compiling the results</li>
<li>EEG</li>
</ul>
<h6 class="text-primary lead">Pramil Panjawani</h6>
<ul>
<li>Skin Conductance analysis</li>
</ul>
<h6 class="text-primary lead">Satvika Anand</h6>
<ul>
<li>EEG analysis</li>
</ul>
<!-- <p class="text-primary lead" > <strong>Publication</strong></p>
<p class="lead" style="text-align:justify;">Kumar Singh, D., Sharma, S., Shukla, J., & Eden, G. (2020, March).<a href="https://dl.acm.org/doi/abs/10.1145/3371382.3378315"><strong> <i> Toy, Tutor, Peer, or Pet? Preliminary Findings from Child-Robot Interactions in a Community School.</i></strong></a> In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 325-327).<a href="https://www.youtube.com/watch?v=hMyGznAFYMk"><i><strong>[Video]</strong></i></a></p> -->
</div>
</div>
</div>
</div>
</div>
<div style="display: none;" id="depfuse-content">
<div class="container">
<div class="row justify-content-center mb-5">
<div class="text-center pb-1">
<h3 class="text-black h1 site-section-heading">DepFuseNet: Depression Recognition from Audio-Visual Features using Multi-modal Fusion</h3>
<img src="images/neha_proj.png" alt="Image" class="img-fluid rounded" >
</div>
</div>
<div class="row mb-5">
<div class="col-md-12 order-md-1" data-aos="fade">
<div class="col-17 mb-4">
<p class="text-primary lead" > <strong>Abstract</strong></p>
<p class="lead" style="text-align:justify;">Depression is a common and serious mental disorder that affects humans all over the world, therefore requires efficient assistance in its early recognition. Previous research has studied several machine learning and deep learning methods for depression recognition. However, a lack of significant efficiency and objectivity, makes these methods less utile. In this paper, we propose an efficient multimodal deep learning approach for depression recognition using audio-visual modalities. We performed feature extraction followed by feature selection on the publicly available DAIC-WOZ database, which contains the raw audios and video features for the clinical interviews of 56 patients with depression and 133 individuals without depression. We used openSmile toolkit to extract 4 different types of audio features, Chroma, PLP, MFCC, and Prosodic. Due to the privacy concerns, clinical interview videos were not available, and hence, we employed the baseline video features provided with the database, namely 2D/3D facial points, HOG, Action Units, Gaze directions, and Head Pose. We further performed feature selection on these audio-visual features to identify the most significant ones for depression recognition. Additionally, we developed a concatenated fusion-based Long Short-Term Memory (LSTM) architecture, named DepFuseNet which utilizes the identified significant features. The DepFuseNet receives insights from both the modalities and also, the handcrafted traditional features reduce the computational complexity of the DepFuseNet architecture. The results obtained from the proposed architecture DepFuseNet outperforms the state of the art method for depression recognition, which affirms the effectiveness of the proposed method for depression recognition using audio/visual modalities.</p>
<br>
<p class="text-primary lead" > <strong>Team members</strong></p>
<p class="lead" style="text-align:justify;">
Neha Goyal<br>
Devashi Choudhary<br>
</p>
<br>
<p class="text-primary lead" > <strong>Team member contributions</strong></p>
<p class="lead" style="text-align:justify;">
<h6 class="text-primary lead">Neha Goyal</h6>
<!-- <ul> -->
<!-- </ul> -->
<h6 class="text-primary lead">Devashi Choudhary</h6>
<!-- <ul> -->
<!-- <li>Skin Conductance analysis</li> -->
<!-- </ul> -->
<!-- <p class="text-primary lead" > <strong>Publication</strong></p>
<p class="lead" style="text-align:justify;">Kumar Singh, D., Sharma, S., Shukla, J., & Eden, G. (2020, March).<a href="https://dl.acm.org/doi/abs/10.1145/3371382.3378315"><strong> <i> Toy, Tutor, Peer, or Pet? Preliminary Findings from Child-Robot Interactions in a Community School.</i></strong></a> In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 325-327).<a href="https://www.youtube.com/watch?v=hMyGznAFYMk"><i><strong>[Video]</strong></i></a></p> -->
</div>
</div>
</div>
</div>
</div>
<div style="display: none;" id="mihir-content">
<div class="container">
<div class="row justify-content-center mb-5">
<div class="text-center pb-1">
<h3 class="text-black h1 site-section-heading">Feature Selection for Clickbait Detection</h3>
<img src="images/mihir_proj.png" alt="Image" class="img-fluid rounded" >
</div>
</div>
<div class="row mb-5">
<div class="col-md-12 order-md-1" data-aos="fade">
<div class="col-17 mb-4">
<p class="text-primary lead" > <strong>Abstract</strong></p>
<p class="lead" style="text-align:justify;">Clickbaits are posts that aim to exploit the natural curiosity of humans by providing incomplete or incorrect information to get users to visit the full posts that typically consist of sensationalized or misleading information. The information provided is just enough to incite their interest. Among various attempts to identify and curb the practice of clickbaits, it can be useful to identify a highly successful method. A clickbait-free environment will help users have a better time browsing the internet. We draw inspiration from the Clickbait Challenge of 2017 in which multiple approaches were proposed to detect clickbaits using machine learning and deep learning techniques. Within these approaches, we find a plethora of features that were used in the models. To the best of our knowledge, a systematic study of these features and their correlation with each other has never been done before. We aim to identify a trade-off between the number of features and the performance of the model to facilitate faster processing without losing much performance. With this knowledge, a more accurate and faster clickbait model can potentially be deployed that would help improve user experience online in real-time. It can also facilitate better research opportunities in the clickbait detection domain.</p>
<br>
<!-- <p class="text-primary lead" > <strong>Team members</strong></p> -->
<!-- <p class="lead" style="text-align:justify;"> -->
<!-- Neha Goyal<br> -->
<!-- Devashi Choudhary<br> -->
<!-- </p> -->
<!-- <br> -->
<!-- <p class="text-primary lead" > <strong>Team member contributions</strong></p> -->
<!-- <p class="lead" style="text-align:justify;"> -->
<!-- <h6 class="text-primary lead">Neha Goyal</h6> -->
<!-- <ul> -->
<!-- </ul> -->
<!-- <h6 class="text-primary lead">Devashi Choudhary</h6> -->
<!-- <ul> -->
<!-- <li>Skin Conductance analysis</li> -->
<!-- </ul> -->
<!-- <p class="text-primary lead" > <strong>Publication</strong></p>
<p class="lead" style="text-align:justify;">Kumar Singh, D., Sharma, S., Shukla, J., & Eden, G. (2020, March).<a href="https://dl.acm.org/doi/abs/10.1145/3371382.3378315"><strong> <i> Toy, Tutor, Peer, or Pet? Preliminary Findings from Child-Robot Interactions in a Community School.</i></strong></a> In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 325-327).<a href="https://www.youtube.com/watch?v=hMyGznAFYMk"><i><strong>[Video]</strong></i></a></p> -->
</div>
</div>
</div>
</div>
</div>
<div style="display: none;" id="anmol-mihir-content">
<div class="container">
<div class="row justify-content-center mb-5">
<div class="text-center pb-1">
<h3 class="text-black h1 site-section-heading">Smart Human Robot Interaction</h3>
<img src="images/anmol_mihir_proj.png" alt="Image" class="img-fluid rounded" >
</div>
</div>
<div class="row mb-5">
<div class="col-md-12 order-md-1" data-aos="fade">
<div class="col-17 mb-4">
<p class="text-primary lead" > <strong>Abstract</strong></p>
<p class="lead" style="text-align:justify;">With the world driving towards automation and technological advancements, the utilisation of service robots for smart home environments is gaining limelight. However, to be able to perform efficiently in a human-like manner, robots need to integrate tasks like object detection and localization with semantic knowledge representation and activity recognition. Therefore, researchers are increasingly working towards mapping high-level knowledge of the world to the robot’s understanding to produce the desired outcome. The goal is to make the robot capable of making intelligent decisions and carry out its job without human intervention as far as possible.
Our work involves making a robot predict activities of daily living of a person living in a smart home environment. Based on the routine of the person, the predictions made by the robot can be useful in making viable choices and adhere to his/her needs. We train an ensemble deep learning architecture to ascertain the need for some robot action and further determine the type of action required. To the best of our knowledge, predicting robot actions based on the semantic nature of the environment using Deep Learning has not been carried out before. </p>
<br>
<p class="text-primary lead" > <strong>Team members</strong></p>
<p class="lead" style="text-align:justify;">
Anmol Singhal <br>
Mihir Goyal <br>
</p>
<br>
<!-- <p class="text-primary lead" > <strong>Team member contributions</strong></p> -->
<!-- <p class="lead" style="text-align:justify;"> -->
<!-- <h6 class="text-primary lead">Neha Goyal</h6> -->
<!-- <ul> -->
<!-- </ul> -->
<!-- <h6 class="text-primary lead">Devashi Choudhary</h6> -->
<!-- <ul> -->
<!-- <li>Skin Conductance analysis</li> -->
<!-- </ul> -->
<!-- <p class="text-primary lead" > <strong>Publication</strong></p>
<p class="lead" style="text-align:justify;">Kumar Singh, D., Sharma, S., Shukla, J., & Eden, G. (2020, March).<a href="https://dl.acm.org/doi/abs/10.1145/3371382.3378315"><strong> <i> Toy, Tutor, Peer, or Pet? Preliminary Findings from Child-Robot Interactions in a Community School.</i></strong></a> In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 325-327).<a href="https://www.youtube.com/watch?v=hMyGznAFYMk"><i><strong>[Video]</strong></i></a></p> -->
</div>
</div>
</div>
</div>
</div>
<div style="display: none;" id="anmol-eye-content">
<div class="container">
<div class="row justify-content-center mb-5">
<div class="text-center pb-1">
<h3 class="text-black h1 site-section-heading">Engagement Evaluation in MOOC Environments using Gaze Estimation</h3>
<img src="images/anmol_eye_proj.png" alt="Image" class="img-fluid rounded" >
</div>
</div>
<div class="row mb-5">
<div class="col-md-12 order-md-1" data-aos="fade">
<div class="col-17 mb-4">
<p class="text-primary lead" > <strong>Abstract</strong></p>
<p class="lead" style="text-align:justify;">With the tremendous increment in reach of the internet, every aspect of communication in our lives are slowly turning into digital modes of communication. Massive Open Online Courses (MOOCs) allow the learners imbibe quality learning at their own comfort through form of digital learning. MOOCs check the quality of learning among their users by multiple subjective and objective tests that ensure quality learning by the users, but these results can often hoodwink the teachers since these quizzes can be given by other accounts and know the answers beforehand. We propose a network that requires no extra hardware, and low computation power , that can help users as well as the teacher estimate the engagement levels of the learner while consuming the resources. Our network uses features of the user's eyes as well as face, combined with the salient features extracted from the MOOC video to estimate attention. This can help learners learn more efficiently, as it can alert the user when their engagement levels are low, as well as let the teacher know so that they can improve on the sections of their lectures where the learners are losing their interests.</p>
<br>
<p class="text-primary lead" > <strong>Team members</strong></p>
<p class="lead" style="text-align:justify;">
Anmol Prasad <br>
Saksham Vohra <br>
</p>
<br>
<p class="text-primary lead" > <strong>Team member contributions</strong></p>
<p class="lead" style="text-align:justify;">
<h6 class="text-primary lead">Anmol Prasad</h6>
<ul>
<li>Gaze tracking program</li>
<li>Data collection protocol</li>
<li>Data pre processing</li>
</ul>
<h6 class="text-primary lead">Saksham Vohra</h6>
<ul>
<li>Gaze tracking program</li>
<li>Data collection protocol</li>
<li>Deep Learning architecture</li>
</ul>
<!-- <p class="text-primary lead" > <strong>Publication</strong></p>
<p class="lead" style="text-align:justify;">Kumar Singh, D., Sharma, S., Shukla, J., & Eden, G. (2020, March).<a href="https://dl.acm.org/doi/abs/10.1145/3371382.3378315"><strong> <i> Toy, Tutor, Peer, or Pet? Preliminary Findings from Child-Robot Interactions in a Community School.</i></strong></a> In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 325-327).<a href="https://www.youtube.com/watch?v=hMyGznAFYMk"><i><strong>[Video]</strong></i></a></p> -->
</div>
</div>
</div>
</div>
</div>
<div style="display: none;" id="shubhangi-ashwini-content">
<div class="container">
<div class="row justify-content-center mb-5">
<div class="text-center pb-1">
<h3 class="text-black h1 site-section-heading">Facial Actions for Artificial Agents</h3>
<img src="images/shubhangi_ashwini_proj.png" alt="Image" class="img-fluid rounded" >
</div>
</div>
<div class="row mb-5">
<div class="col-md-12 order-md-1" data-aos="fade">
<div class="col-17 mb-4">
<p class="text-primary lead" > <strong>Abstract</strong></p>
<p class="lead" style="text-align:justify;">Autism spectrum disorder (ASD) is a developmental disorder that affects communication and behavior. The diagnostic tasks are often complex and cumbersome due to the mental health conditions which children with ASD suffer from, some of which are higher levels of anxiety, depression, attention deficit hyperactivity disorder (ADHD), and disruptive behavior disorders. In this work, we investigate how emotion recognition can be leveraged to achieve sustained attention during the diagnostic tasks in children with autism. We focus on capturing the emotions of the child during diagnostic tasks and the corresponding facial signals that the agent has to display during diagnostic task administration eventually leading to the successful completion of these diagnostic tasks and better diagnosis.</p>
<br>
<p class="text-primary lead" > <strong>Team members</strong></p>
<p class="lead" style="text-align:justify;">
Shubhangi Butta <br>
Ashwini <br>
</p>
<br>
<p class="text-primary lead" > <strong>Team member contributions</strong></p>
<p class="lead" style="text-align:justify;">
<h6 class="text-primary lead">Shubhangi Butta</h6>
<ul>
<!-- <li>Gaze tracking program</li> -->
<!-- <li>Data collection protocol</li> -->
<!-- <li>Data pre processing</li> -->
</ul>
<h6 class="text-primary lead">Ashwini</h6>
<ul>
<!-- <li>Gaze tracking program</li> -->
<!-- <li>Data collection protocol</li> -->
<!-- <li>Deep Learning architecture</li> -->
</ul>
<!-- <p class="text-primary lead" > <strong>Publication</strong></p>
<p class="lead" style="text-align:justify;">Kumar Singh, D., Sharma, S., Shukla, J., & Eden, G. (2020, March).<a href="https://dl.acm.org/doi/abs/10.1145/3371382.3378315"><strong> <i> Toy, Tutor, Peer, or Pet? Preliminary Findings from Child-Robot Interactions in a Community School.</i></strong></a> In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 325-327).<a href="https://www.youtube.com/watch?v=hMyGznAFYMk"><i><strong>[Video]</strong></i></a></p> -->
</div>
</div>
</div>
</div>
</div>
<div style="display: none;" id="chirag-content">
<div class="container">
<div class="row justify-content-center mb-5">
<div class="text-center pb-1">
<h3 class="text-black h1 site-section-heading">Parrot: Picture-Based App for Verbal Communication</h3>
<img src="images/chirag_proj.png" alt="Image" class="img-fluid rounded" >
</div>
</div>
<div class="row mb-5">
<div class="col-md-12 order-md-1" data-aos="fade">
<div class="col-17 mb-4">
<p class="text-primary lead" > <strong>Abstract</strong></p>
<p class="lead" style="text-align:justify;">The aim of the project is to develop a mobile/tablet app to facilitate verbal communication for children with Autism Spectrum Disorder (ASD). The app will be based on a picture exchange communication system (PECS) and will work as a communication aid for children with ASD and for their parents and/or caregivers.</p>
<br>
<p class="text-primary lead" > <strong>Team members</strong></p>
<p class="lead" style="text-align:justify;">
Chirag Jain <br>
Bhavika Rana <br>
</p>
<br>
<p class="text-primary lead" > <strong>Team member contributions</strong></p>
<p class="lead" style="text-align:justify;">
<h6 class="text-primary lead">Chirag Jain</h6>
<ul>
<li>App Developer</li>
</ul>
<h6 class="text-primary lead">Bhavika Rana</h6>
<ul>
<li>UI/UX Designer</li>
</ul>
<!-- <p class="text-primary lead" > <strong>Publication</strong></p>
<p class="lead" style="text-align:justify;">Kumar Singh, D., Sharma, S., Shukla, J., & Eden, G. (2020, March).<a href="https://dl.acm.org/doi/abs/10.1145/3371382.3378315"><strong> <i> Toy, Tutor, Peer, or Pet? Preliminary Findings from Child-Robot Interactions in a Community School.</i></strong></a> In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 325-327).<a href="https://www.youtube.com/watch?v=hMyGznAFYMk"><i><strong>[Video]</strong></i></a></p> -->
</div>
</div>
</div>
</div>
</div>
<div style="display: none;" id="devashi-content">
<div class="container">
<div class="row justify-content-center mb-5">