==========New Papers==========
1, TITLE: Teaching Machine Comprehension with Compositional Explanations
http://arxiv.org/abs/2005.00806
AUTHORS: Qinyuan Ye ; Xiao Huang ; Xiang Ren
HIGHLIGHT: In this paper, we focus on "teaching" machines reading comprehension with (a small number of) natural language explanations.
2, TITLE: More on NP Versus P
http://arxiv.org/abs/2005.00809
AUTHORS: Lev Gordeev
HIGHLIGHT: We generalize a well-known result that P = NP fails for monotone polynomial circuits - more precisely, that the clique problem CLIQUE(k^4,k) is not solvable by Boolean (AND,OR)-circuits of size polynomial in k.
3, TITLE: Knowledge Base Completion: Baseline strikes back (Again)
http://arxiv.org/abs/2005.00804
AUTHORS: Prachi Jain ; Sushant Rathi ; Mausam ; Soumen Chakrabarti
HIGHLIGHT: In this paper, we discuss how simply applying this training regime to a basic model like ComplEx gives near-SOTA performance on all the datasets -- we call this model COMPLEX-V2.
4, TITLE: Treebank Embedding Vectors for Out-of-domain Dependency Parsing
http://arxiv.org/abs/2005.00800
AUTHORS: Joachim Wagner ; James Barry ; Jennifer Foster
COMMENTS: Camera ready for ACL 2020
HIGHLIGHT: We build on this idea by 1) introducing a method to predict a treebank vector for sentences that do not come from a treebank used in training, and 2) exploring what happens when we move away from predefined treebank embedding vectors during test time and instead devise tailored interpolations.
5, TITLE: A survey on modern trainable activation functions
http://arxiv.org/abs/2005.00817
AUTHORS: Andrea Apicella ; Francesco Donnarumma ; Francesco Isgrò ; Roberto Prevete
HIGHLIGHT: In this paper, we present a survey of these models.
6, TITLE: Social Biases in NLP Models as Barriers for Persons with Disabilities
http://arxiv.org/abs/2005.00813
AUTHORS: Ben Hutchinson ; Vinodkumar Prabhakaran ; Emily Denton ; Kellie Webster ; Yu Zhong ; Stephen Denuyl
COMMENTS: ACL 2020 short paper. 5 pages
HIGHLIGHT: In this paper, we present evidence of such undesirable biases towards mentions of disability in two different English language models: toxicity prediction and sentiment analysis.
7, TITLE: DQI: Measuring Data Quality in NLP
http://arxiv.org/abs/2005.00816
AUTHORS: Swaroop Mishra ; Anjana Arunkumar ; Bhavdeep Sachdeva ; Chris Bryan ; Chitta Baral
COMMENTS: 63 pages
HIGHLIGHT: We introduce a generic formula for Data Quality Index (DQI) to help dataset creators create datasets free of such unwanted biases.
8, TITLE: MultiQT: Multimodal Learning for Real-Time Question Tracking in Speech
http://arxiv.org/abs/2005.00812
AUTHORS: Jakob Drachmann Havtorn ; Jan Latko ; Joakim Edin ; Lasse Borgholt ; Lars Maaløe ; Lorenzo Belgrano ; Nicolai Frost Jakobsen ; Regitze Sdun ; Željko Agić
COMMENTS: Accepted at ACL 2020
HIGHLIGHT: We propose a novel multimodal approach to real-time sequence labeling in speech.
9, TITLE: Enhancing Text-based Reinforcement Learning Agents with Commonsense Knowledge
http://arxiv.org/abs/2005.00811
AUTHORS: Keerthiram Murugesan ; Mattia Atzeni ; Pushkar Shukla ; Mrinmaya Sachan ; Pavan Kapanipathi ; Kartik Talamadupula
HIGHLIGHT: In this paper, we consider the recent trend of evaluating progress on reinforcement learning technology by using text-based environments and games as evaluation environments.
10, TITLE: DroTrack: High-speed Drone-based Object Tracking Under Uncertainty
http://arxiv.org/abs/2005.00828
AUTHORS: Ali Hamdi ; Flora Salim ; Du Yong Kim
COMMENTS: 10 pages, 12 figures, FUZZ-IEEE 2020
HIGHLIGHT: We present DroTrack, a high-speed visual single-object tracking framework for drone-captured video sequences.
11, TITLE: Generalized Entropy Regularization or: There's Nothing Special about Label Smoothing
http://arxiv.org/abs/2005.00820
AUTHORS: Clara Meister ; Elizabeth Salesky ; Ryan Cotterell
COMMENTS: Published as long paper at ACL 2020
HIGHLIGHT: We introduce a parametric family of entropy regularizers, which includes label smoothing as a special case, and use it to gain a better understanding of the relationship between the entropy of a model and its performance on language generation tasks.
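
As a quick illustration of label smoothing, the special case named above, here is a minimal numpy sketch of a label-smoothed cross-entropy loss; the uniform-smoothing formulation and the variable names are illustrative assumptions, not the authors' code or their more general regularizer family.

import numpy as np

def label_smoothed_nll(logits, target, eps=0.1):
    # Cross-entropy against a target distribution smoothed with mass eps
    # spread uniformly over the vocabulary; eps = 0 recovers plain cross-entropy.
    batch, vocab = logits.shape
    # log-softmax for numerical stability
    log_probs = logits - logits.max(axis=1, keepdims=True)
    log_probs = log_probs - np.log(np.exp(log_probs).sum(axis=1, keepdims=True))
    # smoothed one-hot targets: eps / vocab everywhere, plus (1 - eps) on the gold label
    smooth = np.full((batch, vocab), eps / vocab)
    smooth[np.arange(batch), target] += 1.0 - eps
    return -(smooth * log_probs).sum(axis=1).mean()

# toy usage
rng = np.random.default_rng(0)
print(label_smoothed_nll(rng.normal(size=(4, 10)), np.array([1, 3, 5, 7])))
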
12, TITLE: Towards Deep Learning Methods for Quality Assessment of Computer-Generated Imagery
http://arxiv.org/abs/2005.00836
AUTHORS: Markus Utke ; Saman Zadtootaghaj ; Steven Schmidt ; Sebastian Möller
COMMENTS: 4 pages
HIGHLIGHT: In this paper, we outline our plan to build a deep learning-based quality metric for video gaming quality assessment.
13, TITLE: Sources of Transfer in Multilingual Named Entity Recognition
http://arxiv.org/abs/2005.00847
AUTHORS: David Mueller ; Nicholas Andrews ; Mark Dredze
COMMENTS: ACL 2020
HIGHLIGHT: To explain this phenomenon, we explore the sources of multilingual transfer in polyglot NER models and examine the weight structure of polyglot models compared to their monolingual counterparts.
14, TITLE: Language Models as an Alternative Evaluator of Word Order Hypotheses: A Case Study in Japanese
http://arxiv.org/abs/2005.00842
AUTHORS: Tatsuki Kuribayashi ; Takumi Ito ; Jun Suzuki ; Kentaro Inui
COMMENTS: Accepted by ACL2020
HIGHLIGHT: In this study, we explore whether the LM-based method is valid for analyzing the word order.
15, TITLE: Derivation of a Constant Velocity Motion Model for Visual Tracking
http://arxiv.org/abs/2005.00844
AUTHORS: Nathanael L. Baisa
HIGHLIGHT: In this document, we derive a constant velocity motion model that incorporates object sizes, which we think can help new researchers get up to speed with it quickly.
16, TITLE: Decision Support for Intoxication Prediction Using Graph Convolutional Networks
http://arxiv.org/abs/2005.00840
AUTHORS: Hendrik Burwinkel ; Matthias Keicher ; David Bani-Harouni ; Tobias Zellner ; Florian Eyer ; Nassir Navab ; Seyed-Ahmad Ahmadi
COMMENTS: 10 pages, 3 figures
HIGHLIGHT: In this work, we propose a new machine learning based CADx method which fuses symptoms and meta information of the patients using graph convolutional networks.
17, TITLE: Transforming and Projecting Images into Class-conditional Generative Networks
http://arxiv.org/abs/2005.01703
AUTHORS: Minyoung Huh ; Richard Zhang ; Jun-Yan Zhu ; Sylvain Paris ; Aaron Hertzmann
HIGHLIGHT: We present a method for projecting an input image into the space of a class-conditional generative neural network.
18, TITLE: Lower Bounds for Non-Elitist Evolutionary Algorithms Via Negative Multiplicative Drift
http://arxiv.org/abs/2005.00853
AUTHORS: Benjamin Doerr
HIGHLIGHT: We propose a simple negative drift theorem for multiplicative drift scenarios and show that it simplifies many existing results.
19, TITLE: SEEK: Segmented Embedding of Knowledge Graphs
http://arxiv.org/abs/2005.00856
AUTHORS: Wentao Xu ; Shun Zheng ; Liang He ; Bin Shao ; Jian Yin ; Tie-Yan Liu
HIGHLIGHT: To mitigate this problem, we propose a lightweight modeling framework that can achieve highly competitive relational expressiveness without increasing the model complexity.
20, TITLE: ENGINE: Energy-Based Inference Networks for Non-Autoregressive Machine Translation
http://arxiv.org/abs/2005.00850
AUTHORS: Lifu Tu ; Richard Yuanzhe Pang ; Sam Wiseman ; Kevin Gimpel
COMMENTS: ACL2020
HIGHLIGHT: We propose to train a non-autoregressive machine translation model to minimize the energy defined by a pretrained autoregressive model.
21, TITLE: A language score based output selection method for multilingual speech recognition
http://arxiv.org/abs/2005.00851
AUTHORS: Van Huy Nguyen ; Thi Quynh Khanh Dinh ; Truong Thinh Nguyen ; Dang Khoa Mac
HIGHLIGHT: For systems that can accept multilingual inputs, the popular approach is to apply a language identifier to the input and then switch or configure decoders in the next step, or use one more subsequent model to select the output from a set of candidates.
22, TITLE: Computing With Words for Student Strategy Evaluation in an Examination
http://arxiv.org/abs/2005.00868
AUTHORS: Prashant K Gupta ; Pranab K. Muhuri
HIGHLIGHT: The main contribution of this paper is to illustrate the use of CWW for student strategy evaluation and present a comparison of the recommendations generated by different CWW approaches.
23, TITLE: Type-2 fuzzy reliability redundancy allocation problem and its solution using particle swarm optimization algorithm
http://arxiv.org/abs/2005.00863
AUTHORS: Zubair Ashraf ; Pranab K. Muhuri ; Q. M. Danish Lohani ; Mukul L. Roy
HIGHLIGHT: In this paper, the fuzzy multi-objective reliability redundancy allocation problem (FMORRAP) is proposed, which maximizes the system reliability while simultaneously minimizing the system cost under the type 2 fuzzy uncertainty.
24, TITLE: Predicting Performance for Natural Language Processing Tasks
http://arxiv.org/abs/2005.00870
AUTHORS: Mengzhou Xia ; Antonios Anastasopoulos ; Ruochen Xu ; Yiming Yang ; Graham Neubig
COMMENTS: Accepted at ACL'20
HIGHLIGHT: In this work, we attempt to explore the possibility of gaining plausible judgments of how well an NLP model can perform under an experimental setting, without actually training or testing the model.
25, TITLE: Single Model Ensemble using Pseudo-Tags and Distinct Vectors
http://arxiv.org/abs/2005.00879
AUTHORS: Ryosuke Kuwabara ; Jun Suzuki ; Hideki Nakayama
COMMENTS: Accepted by ACL2020
HIGHLIGHT: In this study, we propose a novel method that replicates the effects of a model ensemble with a single model.
26, TITLE: wisardpkg -- A library for WiSARD-based models
http://arxiv.org/abs/2005.00887
AUTHORS: Aluizio S. Lima Filho ; Gabriel P. Guarisa ; Leopoldo A. D. Lusquino Filho ; Luiz F. R. Oliveira ; Felipe M. G. Franca ; Priscila M. V. Lima
COMMENTS: 9 pages, 2 figures
HIGHLIGHT: In order to facilitate the production of code using WiSARD-based models, LabZero developed a C++/Python ML library called wisardpkg.
27, TITLE: Rationalizing Medical Relation Prediction from Corpus-level Statistics
http://arxiv.org/abs/2005.00889
AUTHORS: Zhen Wang ; Jennifer Lee ; Simon Lin ; Huan Sun
COMMENTS: ACL 2020
HIGHLIGHT: Aiming to shed some light on how to rationalize medical relation prediction, we present a new interpretable framework inspired by existing theories on how human memory works, e.g., theories of recall and recognition.
28, TITLE: Improving Truthfulness of Headline Generation
http://arxiv.org/abs/2005.00882
AUTHORS: Kazuki Matsumaru ; Sho Takase ; Naoaki Okazaki
COMMENTS: Accepted to ACL 2020
HIGHLIGHT: After confirming quite a few untruthful instances in the datasets, this study hypothesizes that removing untruthful instances from the supervision data may remedy the problem of the untruthful behaviors of the model.
29, TITLE: BeCAPTCHA-Mouse: Synthetic Mouse Trajectories and Improved Bot Detection
http://arxiv.org/abs/2005.00890
AUTHORS: Alejandro Acien ; Aythami Morales ; Julian Fierrez ; Ruben Vera-Rodriguez
HIGHLIGHT: BeCAPTCHA-Mouse: Synthetic Mouse Trajectories and Improved Bot Detection
30, TITLE: Zero-Shot Transfer Learning with Synthesized Data for Multi-Domain Dialogue State Tracking
http://arxiv.org/abs/2005.00891
AUTHORS: Giovanni Campagna ; Agata Foryciarz ; Mehrad Moradshahi ; Monica S. Lam
COMMENTS: 9 pages. To appear in ACL 2020
HIGHLIGHT: This paper proposes a new zero-shot transfer learning technique for dialogue state tracking where the in-domain training data are all synthesized from an abstract dialogue model and the ontology of the domain.
31, TITLE: Multiagent Value Iteration Algorithms in Dynamic Programming and Reinforcement Learning
http://arxiv.org/abs/2005.01627
AUTHORS: Dimitri Bertsekas
HIGHLIGHT: In an earlier work we introduced a policy iteration algorithm, where the policy improvement is done one-agent-at-a-time in a given order, with knowledge of the choices of the preceding agents in the order.
32, TITLE: Deep Feature Mining via Attention-based BiLSTM-GCN for Human Motor Imagery Recognition
http://arxiv.org/abs/2005.00777
AUTHORS: Yimin Hou ; Shuyue Jia ; Shu Zhang ; Xiangmin Lun ; Yan Shi ; Yang Li ; Hanrui Yang ; Rui Zeng ; Jinglei Lv
HIGHLIGHT: This paper presents a novel deep learning approach designed towards remarkably accurate and responsive motor imagery (MI) recognition based on scalp EEG.
33, TITLE: Can BERT Reason? Logically Equivalent Probes for Evaluating the Inference Capabilities of Language Models
http://arxiv.org/abs/2005.00782
AUTHORS: Pei Zhou ; Rahul Khanna ; Bill Yuchen Lin ; Daniel Ho ; Xiang Ren ; Jay Pujara
COMMENTS: 15 pages, 11 figures. Work in progress
HIGHLIGHT: In this work, we address this gap by developing a procedure that allows for the systematized probing of both PTLMs' inference abilities and robustness. Our procedure centers around the methodical creation of logically-equivalent, but syntactically-different sets of probes, of which we create a corpus of 14,400 probes coming from 60 logically-equivalent sets that can be used to probe PTLMs in three task settings.
34, TITLE: Code and Named Entity Recognition in StackOverflow
http://arxiv.org/abs/2005.01634
AUTHORS: Jeniya Tabassum ; Mounica Maddela ; Wei Xu ; Alan Ritter
COMMENTS: The Annual Meeting of the Association for Computational Linguistics (ACL 2020)
HIGHLIGHT: The code token recognizer, combined with an entity segmentation model we propose, consistently improves the performance of the named entity tagger.
35, TITLE: Measuring and Reducing Non-Multifact Reasoning in Multi-hop Question Answering
http://arxiv.org/abs/2005.00789
AUTHORS: Harsh Trivedi ; Niranjan Balasubramanian ; Tushar Khot ; Ashish Sabharwal
HIGHLIGHT: To this end, we introduce an automated sufficiency-based dataset transformation that considers all possible partitions of supporting facts, capturing disconnected reasoning.
36, TITLE: Ego-motion and Surrounding Vehicle State Estimation Using a Monocular Camera
http://arxiv.org/abs/2005.01632
AUTHORS: Jun Hayakawa ; Behzad Dariush
HIGHLIGHT: In this paper, we propose a novel machine learning method to estimate ego-motion and surrounding vehicle state using a single monocular camera.
37, TITLE: Construction and Elicitation of a Black Box Model in the Game of Bridge
http://arxiv.org/abs/2005.01633
AUTHORS: Véronique Ventos ; Daniel Braun ; Colin Deheeger ; Jean Pierre Desmoulins ; Jean Baptiste Fantun ; Swann Legras ; Alexis Rimbaud ; Céline Rouveirol ; Henry Soldano
COMMENTS: vventos@nukk.ai
HIGHLIGHT: We propose the following multi-step methodology: i) build a set of examples for the decision problem and use simulations to associate a decision to each example; ii) use supervised relational learning to build an accurate and readable model; iii) perform a joint analysis between domain experts and data scientists to improve the learning language, including the production by experts of a handmade model; iv) build a better, more readable and accurate model.
38, TITLE: The Paradigm Discovery Problem
http://arxiv.org/abs/2005.01630
AUTHORS: Alexander Erdmann ; Micha Elsner ; Shijie Wu ; Ryan Cotterell ; Nizar Habash
COMMENTS: Forthcoming at ACL 2020
HIGHLIGHT: This work treats the paradigm discovery problem (PDP), the task of learning an inflectional morphological system from unannotated sentences. Using currently available resources, we construct datasets for the task.
39, TITLE: Visually Grounded Continual Learning of Compositional Semantics
http://arxiv.org/abs/2005.00785
AUTHORS: Xisen Jin ; Junyi Du ; Xiang Ren
COMMENTS: 7 pages
HIGHLIGHT: In this paper, we propose a realistic setup by simulating children's language acquisition process.
40, TITLE: KinGDOM: Knowledge-Guided DOMain adaptation for sentiment analysis
http://arxiv.org/abs/2005.00791
AUTHORS: Deepanway Ghosal ; Devamanyu Hazarika ; Navonil Majumder ; Abhinaba Roy ; Soujanya Poria ; Rada Mihalcea
HIGHLIGHT: In this paper, we take a novel perspective on this task by exploring the role of external commonsense knowledge.
41, TITLE: A Probabilistic Generative Model for Typographical Analysis of Early Modern Printing
http://arxiv.org/abs/2005.01646
AUTHORS: Kartik Goyal ; Chris Dyer ; Christopher Warren ; Max G'Sell ; Taylor Berg-Kirkpatrick
COMMENTS: To appear at ACL 2020
HIGHLIGHT: We propose a deep and interpretable probabilistic generative model to analyze glyph shapes in printed Early Modern documents.
42, TITLE: Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
http://arxiv.org/abs/2005.01643
AUTHORS: Sergey Levine ; Aviral Kumar ; George Tucker ; Justin Fu
HIGHLIGHT: In this tutorial article, we aim to provide the reader with the conceptual tools needed to get started on research on offline reinforcement learning algorithms: reinforcement learning algorithms that utilize previously collected data, without additional online data collection.
43, TITLE: A Tale of a Probe and a Parser
http://arxiv.org/abs/2005.01641
AUTHORS: Rowan Hall Maudslay ; Josef Valvoda ; Tiago Pimentel ; Adina Williams ; Ryan Cotterell
HIGHLIGHT: To explore whether syntactic probes would do better to make use of existing techniques, we compare the structural probe to a more traditional parser with an identical lightweight parameterisation.
44, TITLE: Navigating the Landscape of Games
http://arxiv.org/abs/2005.01642
AUTHORS: Shayegan Omidshafiei ; Karl Tuyls ; Wojciech M. Czarnecki ; Francisco C. Santos ; Mark Rowland ; Jerome Connor ; Daniel Hennes ; Paul Muller ; Julien Perolat ; Bart De Vylder ; Audrunas Gruslys ; Remi Munos
HIGHLIGHT: Here, we show how network measures applied to so-called response graphs of large-scale games enable the creation of a useful landscape of games, quantifying the relationships between games of widely varying sizes, characteristics, and complexities.
45, TITLE: A Simple Language Model for Task-Oriented Dialogue
http://arxiv.org/abs/2005.00796
AUTHORS: Ehsan Hosseini-Asl ; Bryan McCann ; Chien-Sheng Wu ; Semih Yavuz ; Richard Socher
HIGHLIGHT: SimpleTOD is a simple approach to task-oriented dialogue that uses a single causal language model trained on all sub-tasks recast as a single sequence prediction problem.
46, TITLE: Words aren't enough, their order matters: On the Robustness of Grounding Visual Referring Expressions
http://arxiv.org/abs/2005.01655
AUTHORS: Arjun R Akula ; Spandana Gella ; Yaser Al-Onaizan ; Song-Chun Zhu ; Siva Reddy
COMMENTS: ACL 2020
HIGHLIGHT: Using these datasets, we empirically show that existing methods fail to exploit linguistic structure and are 12% to 23% lower in performance than the established progress for this task.
47, TITLE: Evaluating Explanation Methods for Neural Machine Translation
http://arxiv.org/abs/2005.01672
AUTHORS: Jierui Li ; Lemao Liu ; Huayang Li ; Guanlin Li ; Guoping Huang ; Shuming Shi
COMMENTS: Accepted to ACL 2020, 9 pages
HIGHLIGHT: As the exact computation for this metric is intractable, we employ an efficient approach as its approximation.
48, TITLE: What is Learned in Visually Grounded Neural Syntax Acquisition
http://arxiv.org/abs/2005.01678
AUTHORS: Noriyuki Kojima ; Hadar Averbuch-Elor ; Alexander M. Rush ; Yoav Artzi
COMMENTS: In ACL 2020
HIGHLIGHT: In this analysis, we consider the case study of the Visually Grounded Neural Syntax Learner (Shi et al., 2019), a recent approach for learning syntax from a visual training signal.
49, TITLE: Fast and Robust Unsupervised Contextual Biasing for Speech Recognition
http://arxiv.org/abs/2005.01677
AUTHORS: Young Mo Kang ; Yingbo Zhou
COMMENTS: 4 pages, 1 figure
HIGHLIGHT: Here we propose an alternative approach that does not entail an explicit contextual language model.
50, TITLE: Group Equivariant Generative Adversarial Networks
http://arxiv.org/abs/2005.01683
AUTHORS: Neel Dey ; Antong Chen ; Soheil Ghafurian
HIGHLIGHT: In this work, we improve gradient feedback between generator and discriminator using an inductive symmetry prior via group-equivariant convolutional networks.
51, TITLE: Obtaining Basic Algebra Formulas with Genetic Programming and Functional Rewriting
http://arxiv.org/abs/2005.01207
AUTHORS: Edwin Camilo Cubides ; Jonatan Gomez
HIGHLIGHT: In this paper, we develop a set of genetic programming operators and an initialization population process based on concepts of functional programming rewriting for boosting inductive genetic programming.
52, TITLE: On the Relationships Between the Grammatical Genders of Inanimate Nouns and Their Co-Occurring Adjectives and Verbs
http://arxiv.org/abs/2005.01204
AUTHORS: Adina Williams ; Ryan Cotterell ; Lawrence Wolf-Sonkin ; Damián Blasi ; Hanna Wallach
COMMENTS: 17 pages, 6 figures, 4 tables, TACL(a) final submission
HIGHLIGHT: We use large-scale corpora in six different gendered languages, along with tools from NLP and information theory, to test whether there is a relationship between the grammatical genders of inanimate nouns and the adjectives used to describe those nouns.
53, TITLE: Unsupervised Alignment-based Iterative Evidence Retrieval for Multi-hop Question Answering
http://arxiv.org/abs/2005.01218
AUTHORS: Vikas Yadav ; Steven Bethard ; Mihai Surdeanu
COMMENTS: Accepted at ACL 2020 as a long conference paper
HIGHLIGHT: We introduce a simple, fast, and unsupervised iterative evidence retrieval method, which relies on three ideas: (a) an unsupervised alignment approach to soft-align questions and answers with justification sentences using only GloVe embeddings, (b) an iterative process that reformulates queries focusing on terms that are not covered by existing justifications, and (c) a stopping criterion that terminates retrieval when the terms in the given question and candidate answers are covered by the retrieved justifications.
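
A rough sketch of idea (a), embedding-based soft alignment, wrapped in a loop in the spirit of (b) and (c). The tokenization, the greedy re-focusing of the query, and the stand-in embedding are simplifying assumptions, not the authors' implementation.

import numpy as np

def align_score(query_vecs, sent_vecs):
    # Soft-align each query token to its best-matching sentence token
    # (max cosine similarity) and sum those maxima.
    if not len(query_vecs) or not len(sent_vecs):
        return 0.0
    q = np.stack(query_vecs)
    s = np.stack(sent_vecs)
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    s = s / np.linalg.norm(s, axis=1, keepdims=True)
    return float((q @ s.T).max(axis=1).sum())

def iterative_retrieval(question, candidates, embed, max_hops=3):
    # embed(text) -> list of word vectors (e.g. GloVe lookups; assumed given).
    # Greedily pick the best-aligned justification, then re-focus the query on
    # question terms not yet covered; stop when everything is covered or the
    # hop budget is spent.
    candidates = list(candidates)
    remaining = set(question.lower().split())
    retrieved = []
    for _ in range(max_hops):
        if not remaining or not candidates:
            break
        query = " ".join(sorted(remaining))
        scores = [align_score(embed(query), embed(c)) for c in candidates]
        retrieved.append(candidates.pop(int(np.argmax(scores))))
        remaining -= set(retrieved[-1].lower().split())
    return retrieved

# Toy usage with a hash-seeded stand-in for real GloVe vectors.
def toy_embed(text, dim=50):
    return [np.random.default_rng(abs(hash(w)) % 2**32).normal(size=dim)
            for w in text.lower().split()]

docs = ["the nile is a river in africa", "paris is in france", "rivers carry water"]
print(iterative_retrieval("which river is in africa", docs, toy_embed))
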
54, TITLE: How to Train Your Energy-Based Model for Regression
http://arxiv.org/abs/2005.01698
AUTHORS: Fredrik K. Gustafsson ; Martin Danelljan ; Radu Timofte ; Thomas B. Schön
COMMENTS: Code is available at https://github.com/fregu856/ebms_regression
HIGHLIGHT: To that end, we propose a simple yet highly effective extension of noise contrastive estimation, and carefully compare its performance to six popular methods from literature on the tasks of 1D regression and object detection.
55, TITLE: Robust Encodings: A Framework for Combating Adversarial Typos
http://arxiv.org/abs/2005.01229
AUTHORS: Erik Jones ; Robin Jia ; Aditi Raghunathan ; Percy Liang
COMMENTS: ACL 2020
HIGHLIGHT: In this work, we introduce robust encodings (RobEn): a simple framework that confers guaranteed robustness, without making compromises on model architecture.
56, TITLE: AIM 2019 Challenge on Video Temporal Super-Resolution: Methods and Results
http://arxiv.org/abs/2005.01233
AUTHORS: Seungjun Nah ; Sanghyun Son ; Radu Timofte ; Kyoung Mu Lee
COMMENTS: Published in ICCV 2019 Workshop (Advances in Image Manipulation)
HIGHLIGHT: This paper reviews the first AIM challenge on video temporal super-resolution (frame interpolation) with a focus on the proposed solutions and results.
57, TITLE: Visual Question Answering with Prior Class Semantics
http://arxiv.org/abs/2005.01239
AUTHORS: Violetta Shevchenko ; Damien Teney ; Anthony Dick ; Anton van den Hengel
HIGHLIGHT: We present a novel mechanism to embed prior knowledge in a model for visual question answering.
58, TITLE: One-Shot Image Classification by Learning to Restore Prototypes
http://arxiv.org/abs/2005.01234
AUTHORS: Wanqi Xue ; Wei Wang
COMMENTS: Published as a conference paper in AAAI 2020
HIGHLIGHT: In this paper, we adopt metric learning for this problem, which has been applied for few- and many-shot image classification by comparing the distance between the test image and the center of each class in the feature space.
59, TITLE: NTIRE 2020 Challenge on Image and Video Deblurring
http://arxiv.org/abs/2005.01244
AUTHORS: Seungjun Nah ; Sanghyun Son ; Radu Timofte ; Kyoung Mu Lee
COMMENTS: To be published in CVPR 2020 Workshop (New Trends in Image Restoration and Enhancement)
HIGHLIGHT: In this challenge, we present the evaluation results from 3 competition tracks as well as the proposed solutions.
60, TITLE: Generalized Reinforcement Meta Learning for Few-Shot Optimization
http://arxiv.org/abs/2005.01246
AUTHORS: Raviteja Anantha ; Stephen Pulman ; Srinivas Chappidi
COMMENTS: 10 pages, 4 figures, 4 tables, 2 algorithms, ICML conference
HIGHLIGHT: We present a generic and flexible Reinforcement Learning (RL) based meta-learning framework for the problem of few-shot learning.
61, TITLE: Noise Pollution in Hospital Readmission Prediction: Long Document Classification with Reinforcement Learning
http://arxiv.org/abs/2005.01259
AUTHORS: Liyan Xu ; Julien Hogan ; Rachel E. Patzer ; Jinho D. Choi
COMMENTS: Accepted to ACL BioNLP Workshop 2020
HIGHLIGHT: This paper presents a reinforcement learning approach to extract noise in long clinical documents for the task of readmission prediction after kidney transplant.
62, TITLE: A New Data Normalization Method to Improve Dialogue Generation by Minimizing Long Tail Effect
http://arxiv.org/abs/2005.01278
AUTHORS: Zhiqiang Zhan ; Zifeng Hou ; Yang Zhang
HIGHLIGHT: To address this issue, we analyze a large corpus from Wikipedia and propose three frequency-based data normalization methods.
63, TITLE: Improving Adversarial Text Generation by Modeling the Distant Future
http://arxiv.org/abs/2005.01279
AUTHORS: Ruiyi Zhang ; Changyou Chen ; Zhe Gan ; Wenlin Wang ; Dinghan Shen ; Guoyin Wang ; Zheng Wen ; Lawrence Carin
COMMENTS: ACL 2020. arXiv admin note: substantial text overlap with arXiv:1811.00696
HIGHLIGHT: We consider a text planning scheme and present a model-based imitation-learning approach to alleviate the aforementioned issues.
64, TITLE: WikiUMLS: Aligning UMLS to Wikipedia via Cross-lingual Neural Ranking
http://arxiv.org/abs/2005.01281
AUTHORS: Afshin Rahimi ; Timothy Baldwin ; Karin Verspoor
HIGHLIGHT: We propose a cross-lingual neural reranking model to match a UMLS concept with a Wikipedia page, which achieves a recall@1 of 71%, a substantial improvement of 20% over word- and char-level BM25, enabling manual alignment with minimal effort.
65, TITLE: Distributional Discrepancy: A Metric for Unconditional Text Generation
http://arxiv.org/abs/2005.01282
AUTHORS: Ping Cai ; Xingyuan Chen ; Peng Jin ; Hongjun Wang ; Tianrui Li
HIGHLIGHT: Thus, we propose a method to estimate DD by training a neural-network-based text classifier.
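
The general recipe of estimating a discrepancy between real and generated text with a discriminative classifier can be sketched with off-the-shelf tools. The features, classifier, and the particular discrepancy statistic below are illustrative assumptions, not the authors' definition of DD.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
import numpy as np

def classifier_discrepancy(real_texts, generated_texts):
    # Train a real-vs-generated classifier; the further its cross-validated
    # accuracy sits above chance (0.5), the larger the estimated discrepancy.
    texts = list(real_texts) + list(generated_texts)
    labels = np.array([1] * len(real_texts) + [0] * len(generated_texts))
    features = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(texts)
    acc = cross_val_score(LogisticRegression(max_iter=1000), features, labels, cv=3).mean()
    return max(0.0, 2.0 * (acc - 0.5))  # 0 = indistinguishable, 1 = fully separable

# toy usage
real = ["the cat sat on the mat", "a dog barked at the mailman", "rain fell all night"]
fake = ["cat cat the the mat", "dog dog bark bark bark", "rain rain rain night night"]
print(classifier_discrepancy(real, fake))
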
66, TITLE: Rational Solutions of First Order Algebraic Ordinary Differential Equations
http://arxiv.org/abs/2005.01289
AUTHORS: Ruyong Feng ; Shuang Feng
COMMENTS: 40 pages
HIGHLIGHT: Rational Solutions of First Order Algebraic Ordinary Differential Equations
67, TITLE: Human Strategic Steering Improves Performance of Interactive Optimization
http://arxiv.org/abs/2005.01291
AUTHORS: Fabio Colella ; Pedram Daee ; Jussi Jokinen ; Antti Oulasvirta ; Samuel Kaski
COMMENTS: 10 pages, 5 figures, The paper is published in the proceedings of UMAP 2020. Codes available at https://github.com/fcole90/interactive_bayesian_optimisation
HIGHLIGHT: The optimization is done based on the user's earlier feedback (e.g. "likes" and "dislikes"), and the algorithms assume the feedback to be faithful.
68, TITLE: Clue: Cross-modal Coherence Modeling for Caption Generation
http://arxiv.org/abs/2005.00908
AUTHORS: Malihe Alikhani ; Piyush Sharma ; Shengjie Li ; Radu Soricut ; Matthew Stone
COMMENTS: Accepted as a long paper to ACL 2020
HIGHLIGHT: We introduce a new task for learning inferences in imagery and text, coherence relation prediction, and show that these coherence annotations can be exploited to learn relation classifiers as an intermediary step, and also train coherence-aware, controllable image captioning models.
69, TITLE: The ILASP system for Inductive Learning of Answer Set Programs
http://arxiv.org/abs/2005.00904
AUTHORS: Mark Law ; Alessandra Russo ; Krysia Broda
COMMENTS: Submitted to the ALP newsletter
HIGHLIGHT: Learning such expressive programs widens the applicability of ILP considerably; for example, enabling preference learning, learning common-sense knowledge, including defaults and exceptions, and learning non-deterministic theories.
70, TITLE: Examining Citations of Natural Language Processing Literature
http://arxiv.org/abs/2005.00912
AUTHORS: Saif M. Mohammad
HIGHLIGHT: The analyses presented here, and the associated dataset of NLP papers mapped to citations, have a number of uses including: understanding how the field is growing and quantifying the impact of different types of papers.
71, TITLE: Quantifying Attention Flow in Transformers
http://arxiv.org/abs/2005.00928
AUTHORS: Samira Abnar ; Willem Zuidema
HIGHLIGHT: In this paper, we consider the problem of quantifying this flow of information through self-attention.
72, TITLE: Multi-Modality Generative Adversarial Networks with Tumor Consistency Loss for Brain MR Image Synthesis
http://arxiv.org/abs/2005.00925
AUTHORS: Bingyu Xin ; Yifan Hu ; Yefeng Zheng ; Hongen Liao
COMMENTS: 5 pages, 3 figures, accepted to IEEE ISBI 2020
HIGHLIGHT: To address this problem, we propose a multi-modality generative adversarial network (MGAN) to synthesize three high-quality MR modalities (FLAIR, T1 and T1ce) from one MR modality T2 simultaneously.
73, TITLE: SAMP: Shape and Motion Priors for 4D Vehicle Reconstruction
http://arxiv.org/abs/2005.00922
AUTHORS: Francis Engelmann ; Jörg Stückler ; Bastian Leibe
HIGHLIGHT: In this paper, we propose to use 3D shape and motion priors to regularize the estimation of the trajectory and the shape of vehicles in sequences of stereo images.
74, TITLE: Improving Non-autoregressive Neural Machine Translation with Monolingual Data
http://arxiv.org/abs/2005.00932
AUTHORS: Jiawei Zhou ; Phillip Keung
COMMENTS: To appear in ACL 2020
HIGHLIGHT: Under this framework, we leverage large monolingual corpora to improve the NAR model's performance, with the goal of transferring the AR model's generalization ability while preventing overfitting.
75, TITLE: Towards Occlusion-Aware Multifocal Displays
http://arxiv.org/abs/2005.00946
AUTHORS: Jen-Hao Rick Chang ; Anat Levin ; B. V. K. Vijaya Kumar ; Aswin C. Sankaranarayanan
COMMENTS: SIGGRAPH 2020
HIGHLIGHT: This paper enables occlusion-aware multifocal displays using a novel ConeTilt operator that provides an additional degree of freedom -- tilting the light cone emitted at each pixel of the display panel.
76, TITLE: Tensor optimal transport, distance between sets of measures and tensor scaling
http://arxiv.org/abs/2005.00945
AUTHORS: Shmuel Friedland
COMMENTS: 32 pages, some of the results in arXiv:1905.11384 are repeated
HIGHLIGHT: We introduce an entropic regularization term, which gives rise to a scaling of tensors.
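
For orientation, in the classical matrix (two-marginal) case, entropic regularization of optimal transport leads to Sinkhorn-style alternating scaling; the paper generalizes this kind of scaling to tensors and sets of measures, which this toy sketch does not attempt.

import numpy as np

def sinkhorn(cost, r, c, eps=0.05, iters=200):
    # Entropy-regularized OT between marginals r and c (matrix case):
    # alternately rescale rows and columns of K = exp(-cost/eps) so the
    # resulting plan matches both marginals.
    K = np.exp(-cost / eps)
    u = np.ones_like(r)
    v = np.ones_like(c)
    for _ in range(iters):
        u = r / (K @ v)
        v = c / (K.T @ u)
    plan = u[:, None] * K * v[None, :]
    return plan, float((plan * cost).sum())

# toy usage: transport between two 4-point uniform distributions on [0, 1]
x = np.linspace(0, 1, 4)
y = np.linspace(0, 1, 4)
cost = (x[:, None] - y[None, :]) ** 2
plan, value = sinkhorn(cost, np.full(4, 0.25), np.full(4, 0.25))
print(np.round(plan, 3), value)
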
77, TITLE: Understanding and Improving Information Transfer in Multi-Task Learning
http://arxiv.org/abs/2005.00944
AUTHORS: Sen Wu ; Hongyang R. Zhang ; Christopher Ré
COMMENTS: Appeared in ICLR 2020
HIGHLIGHT: We study the theory of this setting on linear and ReLU-activated models.
78, TITLE: Bootstrapping Techniques for Polysynthetic Morphological Analysis
http://arxiv.org/abs/2005.00956
AUTHORS: William Lane ; Steven Bird
HIGHLIGHT: To address this challenge, we offer linguistically-informed approaches for bootstrapping a neural morphological analyzer, and demonstrate its application to Kunwinjku, a polysynthetic Australian language. We generate data from a finite state transducer to train an encoder-decoder model.
79, TITLE: On the Convergence Rate of Projected Gradient Descent for a Back-Projection based Objective
http://arxiv.org/abs/2005.00959
AUTHORS: Tom Tirer ; Raja Giryes
HIGHLIGHT: In the current paper, we examine the convergence rate of the BP objective for the projected gradient descent (PGD) algorithm and identify an inherent source for its faster convergence compared to the LS objective.
80, TITLE: Deep Generative Adversarial Residual Convolutional Networks for Real-World Super-Resolution
http://arxiv.org/abs/2005.00953
AUTHORS: Rao Muhammad Umer ; Gian Luca Foresti ; Christian Micheloni
HIGHLIGHT: To address these problems, we propose a deep Super-Resolution Residual Convolutional Generative Adversarial Network (SRResCGAN) to follow the real-world degradation settings by adversarially training the model with pixel-wise supervision in the HR domain from its generated LR counterpart.
81, TITLE: How Can We Accelerate Progress Towards Human-like Linguistic Generalization?
http://arxiv.org/abs/2005.00955
AUTHORS: Tal Linzen
COMMENTS: ACL 2020
HIGHLIGHT: This position paper describes and critiques the Pretraining-Agnostic Identically Distributed (PAID) evaluation paradigm, which has become a central tool for measuring progress in natural language understanding.
82, TITLE: Extracting Entities and Topics from News and Connecting Criminal Records
http://arxiv.org/abs/2005.00950
AUTHORS: Quang Pham ; Marija Stanojevic ; Zoran Obradovic
COMMENTS: This is a report submitted by an undergraduate student as preliminary work on this problem
HIGHLIGHT: The goal of this paper is to summarize methodologies used in extracting entities and topics from a database of criminal records and from a database of newspapers.
83, TITLE: Towards Faithful Neural Table-to-Text Generation with Content-Matching Constraints
http://arxiv.org/abs/2005.00969
AUTHORS: Zhenyi Wang ; Xiaoyang Wang ; Bang An ; Dong Yu ; Changyou Chen
COMMENTS: Accepted at ACL2020
HIGHLIGHT: In this paper, for the first time, we propose a novel Transformer-based generation framework to achieve the goal.
84, TITLE: On the Inference Calibration of Neural Machine Translation
http://arxiv.org/abs/2005.00963
AUTHORS: Shuo Wang ; Zhaopeng Tu ; Shuming Shi ; Yang Liu
COMMENTS: Accepted by ACL2020
HIGHLIGHT: By carefully designing experiments on three language pairs, our work provides in-depth analyses of the correlation between calibration and translation performance as well as linguistic properties of miscalibration and reports a number of interesting findings that might help humans better analyze, understand and improve NMT models.
85, TITLE: Boundary-aware Context Neural Network for Medical Image Segmentation
http://arxiv.org/abs/2005.00966
AUTHORS: Ruxin Wang ; Shuyuan Chen ; Chaojie Ji ; Jianping Fan ; Ye Li
HIGHLIGHT: In this paper, we formulate a boundary-aware context neural network (BA-Net) for 2D medical image segmentation to capture richer context and preserve fine spatial information.
86, TITLE: Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation
http://arxiv.org/abs/2005.00965
AUTHORS: Tianlu Wang ; Xi Victoria Lin ; Nazneen Fatema Rajani ; Bryan McCann ; Vicente Ordonez ; Caiming Xiong
COMMENTS: Accepted to ACL 2020
HIGHLIGHT: We propose a simple but effective technique, Double Hard Debias, which purifies the word embeddings against such corpus regularities prior to inferring and removing the gender subspace.
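
The "Hard Debias" step that Double-Hard Debias builds on removes each word vector's projection onto an estimated gender direction. A minimal sketch of that baseline step with made-up toy vectors; the frequency-direction purification that is the paper's actual contribution is omitted.

import numpy as np

def gender_direction(pairs, emb):
    # Estimate a gender direction from definitional pairs, e.g. (he, she),
    # as the top principal direction of the centered pair differences.
    diffs = np.stack([emb[a] - emb[b] for a, b in pairs])
    _, _, vt = np.linalg.svd(diffs - diffs.mean(axis=0), full_matrices=False)
    return vt[0]

def hard_debias(emb, direction, protected=()):
    # Remove the component along `direction` from every non-protected word.
    d = direction / np.linalg.norm(direction)
    return {w: (v if w in protected else v - (v @ d) * d) for w, v in emb.items()}

# toy usage with random stand-in vectors
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=8) for w in ["he", "she", "man", "woman", "doctor", "nurse"]}
g = gender_direction([("he", "she"), ("man", "woman")], emb)
debiased = hard_debias(emb, g, protected={"he", "she", "man", "woman"})
print(round(float(debiased["doctor"] @ g / np.linalg.norm(g)), 6))  # ~0 after projection
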
87, TITLE: Gender Gap in Natural Language Processing Research: Disparities in Authorship and Citations
http://arxiv.org/abs/2005.00962
AUTHORS: Saif M. Mohammad
HIGHLIGHT: In this work, we examine female first author percentages and the citations to their papers in Natural Language Processing.
88, TITLE: Bayesian Entailment Hypothesis: How Brains Implement Monotonic and Non-monotonic Reasoning
http://arxiv.org/abs/2005.00961
AUTHORS: Hiroyuki Kido
COMMENTS: 7 pages, 3 figures
HIGHLIGHT: In this paper, we give a Bayesian account of entailment and characterize its abstract inferential properties.
89, TITLE: How Does Selective Mechanism Improve Self-Attention Networks?
http://arxiv.org/abs/2005.00979
AUTHORS: Xinwei Geng ; Longyue Wang ; Xing Wang ; Bing Qin ; Ting Liu ; Zhaopeng Tu
COMMENTS: ACL 2020
HIGHLIGHT: In this paper, we bridge the gap by assessing the strengths of selective SANs (SSANs), which are implemented with a flexible and universal Gumbel-Softmax.
90, TITLE: Efficient Second-Order TreeCRF for Neural Dependency Parsing
http://arxiv.org/abs/2005.00975
AUTHORS: Yu Zhang ; Zhenghua Li ; Min Zhang
COMMENTS: ACL 2020
HIGHLIGHT: To address this issue, we propose an effective way to batchify the inside and Viterbi algorithms for direct large matrix operation on GPUs, and to avoid the complex outside algorithm via efficient back-propagation.
91, TITLE: Quadtree Driven Lossy Event Compression
http://arxiv.org/abs/2005.00974
AUTHORS: Srutarshi Banerjee ; Zihao W. Wang ; Henry H. Chopp ; Oliver Cossairt ; Aggelos Katsaggelos
COMMENTS: 12 pages in total
HIGHLIGHT: In this paper, we perform lossy event compression (LEC) based on a quadtree (QT) segmentation map derived from an adjacent image.
92, TITLE: A Concise yet Effective model for Non-Aligned Incomplete Multi-view and Missing Multi-label Learning
http://arxiv.org/abs/2005.00976
AUTHORS: Xiang Li ; Songcan Chen
COMMENTS: 9 pages, 4 figures
HIGHLIGHT: In this paper, our goal is to meet all of them by presenting a concise yet effective model with only one hyper-parameter, under minimal modeling assumptions.
93, TITLE: Unsupervised Morphological Paradigm Completion
http://arxiv.org/abs/2005.00970
AUTHORS: Huiming Jin ; Liwei Cai ; Yihui Peng ; Chen Xia ; Arya D. McCarthy ; Katharina Kann
COMMENTS: Accepted by ACL 2020
HIGHLIGHT: We propose the task of unsupervised morphological paradigm completion.
94, TITLE: Using Artificial Intelligence to Analyze Fashion Trends
http://arxiv.org/abs/2005.00986
AUTHORS: Mengyun Shi ; Van Dyk Lewis
HIGHLIGHT: To improve the efficiency of data analysis of such image-based information and lower the cost of analyzing fashion images, this study proposes a data-driven quantitative abstracting approach using an artificial intelligence (A.I.) algorithm.
95, TITLE: Encoder-Decoder Models Can Benefit from Pre-trained Masked Language Models in Grammatical Error Correction
http://arxiv.org/abs/2005.00987
AUTHORS: Masahiro Kaneko ; Masato Mita ; Shun Kiyono ; Jun Suzuki ; Kentaro Inui
COMMENTS: Accepted as a short paper to the 58th Annual Conference of the Association for Computational Linguistics (ACL-2020)
HIGHLIGHT: This paper investigates how to effectively incorporate a pre-trained masked language model (MLM), such as BERT, into an encoder-decoder (EncDec) model for grammatical error correction (GEC).
96, TITLE: Joint-SRVDNet: Joint Super Resolution and Vehicle Detection Network
http://arxiv.org/abs/2005.00983
AUTHORS: Moktari Mostofa ; Syeda Nyma Ferdous ; Benjamin S. Riggan ; Nasser M. Nasrabadi
HIGHLIGHT: To address this problem, we propose a Joint Super-Resolution and Vehicle Detection Network (Joint-SRVDNet) that tries to generate discriminative, high-resolution images of vehicles from low-resolution aerial images.
97, TITLE: pyBART: Evidence-based Syntactic Transformations for IE
http://arxiv.org/abs/2005.01306
AUTHORS: Aryeh Tiktinsky ; Yoav Goldberg ; Reut Tsarfaty
HIGHLIGHT: We introduce a broad-coverage, data-driven and linguistically sound set of transformations, that makes event-structure and many lexical relations explicit.
98, TITLE: NLP in FinTech Applications: Past, Present and Future
http://arxiv.org/abs/2005.01320
AUTHORS: Chung-Chi Chen ; Hen-Hsen Huang ; Hsin-Hsi Chen
HIGHLIGHT: In this position paper, we focus on research applying natural language processing (NLP) technologies in the finance domain.
99, TITLE: DoQA -- Accessing Domain-Specific FAQs via Conversational QA
http://arxiv.org/abs/2005.01328
AUTHORS: Jon Ander Campos ; Arantxa Otegi ; Aitor Soroa ; Jan Deriu ; Mark Cieliebak ; Eneko Agirre
COMMENTS: Accepted at ACL 2020. 13 pages 4 figures
HIGHLIGHT: The goal of this work is to build conversational Question Answering (QA) interfaces for the large body of domain-specific information available in FAQ sites.
100, TITLE: Span programs and quantum time complexity
http://arxiv.org/abs/2005.01323
AUTHORS: Arjan Cornelissen ; Stacey Jeffery ; Maris Ozols ; Alvaro Piedrafita
COMMENTS: 54 pages, 2 figures
HIGHLIGHT: In this work, we prove an analogous connection for quantum time complexity.
101, TITLE: From SPMRL to NMRL: What Did We Learn (and Unlearn) in a Decade of Parsing Morphologically-Rich Languages (MRLs)?
http://arxiv.org/abs/2005.01330
AUTHORS: Reut Tsarfaty ; Dan Bareket ; Stav Klein ; Amit Seker
HIGHLIGHT: Here we reflect on parsing MRLs in that decade, highlight the solutions and lessons learned for the architectural, modeling and lexical challenges in the pre-neural era, and argue that similar challenges re-emerge in neural architectures for MRLs.
102, TITLE: A Comparative Study of Image Quality Assessment Models through Perceptual Optimization
http://arxiv.org/abs/2005.01338
AUTHORS: Keyan Ding ; Kede Ma ; Shiqi Wang ; Eero P. Simoncelli
HIGHLIGHT: Perceptual datasets (e.g., LIVE and TID2013) gathered for this purpose provide useful benchmarks for improving IQA methods, but their heavy use creates a risk of overfitting.
103, TITLE: A Model-driven Deep Neural Network for Single Image Rain Removal
http://arxiv.org/abs/2005.01333
AUTHORS: Hong Wang ; Qi Xie ; Qian Zhao ; Deyu Meng
HIGHLIGHT: To address this issue, in this paper we propose a model-driven deep neural network for the task, with fully interpretable network structures.
104, TITLE: The Sensitivity of Language Models and Humans to Winograd Schema Perturbations
http://arxiv.org/abs/2005.01348
AUTHORS: Mostafa Abdou ; Vinit Ravishankar ; Maria Barrett ; Yonatan Belinkov ; Desmond Elliott ; Anders Søgaard
COMMENTS: ACL 2020
HIGHLIGHT: Our results highlight interesting differences between humans and language models: language models are more sensitive to number or gender alternations and synonym replacements than humans, and humans are more stable and consistent in their predictions, maintain a much higher absolute performance, and perform better on non-associative instances than associative ones.
105, TITLE: How to Train Your Dragon: Tamed Warping Network for Semantic Video Segmentation
http://arxiv.org/abs/2005.01344
AUTHORS: Junyi Feng ; Songyuan Li ; Yifeng Chen ; Fuxian Huang ; Jiabao Cui ; Xi Li
HIGHLIGHT: In this paper, we propose to introduce a simple and effective correction stage right after the warping stage to form a framework named Tamed Warping Network (TWNet), aiming to improve the accuracy and robustness of warping-based models.
106, TITLE: Anchors based method for fingertips position from a monocular RGB image using Deep Neural Network
http://arxiv.org/abs/2005.01351
AUTHORS: Purnendu Mishra ; Kishor Sarawadekar
COMMENTS: 10 pages, 10 figures
HIGHLIGHT: In this paper, we propose a deep neural network (DNN) based methodology to estimate the fingertips position.
107, TITLE: On Systematically Building a Controlled Natural Language for Functional Requirements
http://arxiv.org/abs/2005.01355
AUTHORS: Alvaro Veizaga ; Mauricio Alferez ; Damiano Torre ; Mehrdad Sabetzadeh ; Lionel Briand
HIGHLIGHT: Our contributions draw on 15 representative SRSs, collectively containing 3215 NL requirements statements from the financial domain.
108, TITLE: Synchronization of Deterministic Visibly Push-Down Automata
http://arxiv.org/abs/2005.01374
AUTHORS: Henning Fernau ; Petra Wolf
HIGHLIGHT: We generalize the concept of synchronizing words for finite automata, which map all states of the automata to the same state, to deterministic visibly push-down automata.
109, TITLE: Introducing the VoicePrivacy Initiative
http://arxiv.org/abs/2005.01387
AUTHORS: Natalia Tomashenko ; Brij Mohan Lal Srivastava ; Xin Wang ; Emmanuel Vincent ; Andreas Nautsch ; Junichi Yamagishi ; Nicholas Evans ; Jose Patino ; Jean-François Bonastre ; Paul-Gauthier Noé ; Massimiliano Todisco
COMMENTS: Submitted to Interspeech 2020
HIGHLIGHT: In this paper, we formulate the voice anonymization task selected for the VoicePrivacy 2020 Challenge and describe the datasets used for system development and evaluation.
110, TITLE: Monitoring COVID-19 social distancing with person detection and tracking via fine-tuned YOLO v3 and Deepsort techniques
http://arxiv.org/abs/2005.01385
AUTHORS: Narinder Singh Punn ; Sanjay Kumar Sonbhadra ; Sonali Agarwal
HIGHLIGHT: Motivated by this notion, this article proposes a deep learning based framework for automating the task of monitoring social distancing using surveillance video.
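
The detector and tracker themselves (YOLO v3, Deepsort) are separate components; here is a toy sketch of only the downstream step of flagging pairs of detected people who stand too close together. The bottom-centre centroid proxy and the pixel distance threshold are simplifying assumptions, not the article's calibration.

from itertools import combinations

def centroid(box):
    # box = (x1, y1, x2, y2) in pixels; use the bottom-centre point as a
    # rough proxy for where the person stands on the ground plane.
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, y2)

def violations(boxes, min_dist_px=150.0):
    # Return index pairs of detections closer than min_dist_px.
    pts = [centroid(b) for b in boxes]
    bad = []
    for (i, p), (j, q) in combinations(enumerate(pts), 2):
        if ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 < min_dist_px:
            bad.append((i, j))
    return bad

# toy usage on three hand-made person boxes: the first two are too close
print(violations([(100, 200, 160, 400), (180, 210, 240, 410), (600, 190, 660, 420)]))
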
111, TITLE: Synchronizing Deterministic Push-Down Automata Can Be Really Hard
http://arxiv.org/abs/2005.01381
AUTHORS: Henning Fernau ; Petra Wolf ; Tomoyuki Yamakami
HIGHLIGHT: In this paper, we extend this algorithmic question to deterministic automata beyond finite automata.
112, TITLE: Explaining AI-based Decision Support Systems using Concept Localization Maps
http://arxiv.org/abs/2005.01399
AUTHORS: Adriano Lucieri ; Muhammad Naseer Bajwa ; Andreas Dengel ; Sheraz Ahmed
COMMENTS: Submitted to ICANN2020
HIGHLIGHT: This paper introduces Concept Localization Maps (CLMs), which is a novel approach towards explainable image classifiers employed as DSS. To better understand the effectiveness of the proposed method, we generated a new synthetic dataset called Simple Concept DataBase (SCDB) that includes annotations for 10 distinguishable concepts, and made it publicly available.
113, TITLE: The complexity of approximating the complex-valued Potts model
http://arxiv.org/abs/2005.01076
AUTHORS: Andreas Galanis ; Leslie Ann Goldberg ; Andrés Herrera-Poyatos
COMMENTS: 57 pages
HIGHLIGHT: We study the complexity of approximating the partition function of the $q$-state Potts model and the closely related Tutte polynomial for complex values of the underlying parameters.
114, TITLE: Autoencoders for strategic decision support
http://arxiv.org/abs/2005.01075
AUTHORS: Sam Verboven ; Jeroen Berrevoets ; Chris Wuytens ; Bart Baesens ; Wouter Verbeke
HIGHLIGHT: We introduce and extend the use of autoencoders to provide strategically relevant granular feedback.
115, TITLE: A Multialternative Neural Decision Process
http://arxiv.org/abs/2005.01081
AUTHORS: Simone Cerreia-Vioglio ; Fabio Maccheroni ; Massimo Marinacci
HIGHLIGHT: We introduce an algorithmic decision process for multialternative choice that combines binary comparisons and Markovian exploration.
116, TITLE: Neural Data-to-Text Generation via Jointly Learning the Segmentation and Correspondence
http://arxiv.org/abs/2005.01096
AUTHORS: Xiaoyu Shen ; Ernie Chang ; Hui Su ; Jie Zhou ; Dietrich Klakow
COMMENTS: Accepted at ACL 2020
HIGHLIGHT: To address this concern, we propose to explicitly segment target text into fragment units and align them with their data correspondences.
117, TITLE: Remote Sensing Image Scene Classification Meets Deep Learning: Challenges, Methods, Benchmarks, and Opportunities
http://arxiv.org/abs/2005.01094
AUTHORS: Gong Cheng ; Xingxing Xie ; Junwei Han ; Lei Guo ; Gui-Song Xia
COMMENTS: 20 pages, 10 figures
HIGHLIGHT: To be specific, we discuss the main challenges of scene classification and survey (1) Autoencoder-based scene classification methods, (2) Convolutional Neural Network-based scene classification methods, and (3) Generative Adversarial Network-based scene classification methods. In addition, we introduce the benchmarks used for scene classification and summarize the performance of more than two dozen representative algorithms on three commonly-used benchmark data sets.
118, TITLE: A Little Bit More: Bitplane-Wise Bit-Depth Recovery
http://arxiv.org/abs/2005.01091
AUTHORS: Abhijith Punnappurath ; Michael S. Brown
HIGHLIGHT: In contrast, we propose a training and inference strategy that recovers the residual image bitplane-by-bitplane.
119, TITLE: It is Time for New Perspectives on How to Fight Bloat in GP
http://arxiv.org/abs/2005.00603
AUTHORS: Francisco Fernández de Vega ; Gustavo Olague ; Francisco Chávez ; Daniel Lanza ; Wolfgang Banzhaf ; Erik Goodman
HIGHLIGHT: This paper considers time and space as two sides of a single coin when devising a more natural method for fighting bloat.
120, TITLE: Probing Text Models for Common Ground with Visual Representations
http://arxiv.org/abs/2005.00619
AUTHORS: Gabriel Ilharco ; Rowan Zellers ; Ali Farhadi ; Hannaneh Hajishirzi
HIGHLIGHT: To better understand how text models are connected to our visual perceptions, we propose a method for examining the similarities between neural representations extracted from words in text and objects in images.
121, TITLE: Neural Lyapunov Control
http://arxiv.org/abs/2005.00611
AUTHORS: Ya-Chien Chang ; Nima Roohi ; Sicun Gao
COMMENTS: Published in NeurIPS 2019
HIGHLIGHT: We propose new methods for learning control policies and neural network Lyapunov functions for nonlinear control problems, with provable guarantee of stability.
122, TITLE: Multi-Dimensional Gender Bias Classification
http://arxiv.org/abs/2005.00614
AUTHORS: Emily Dinan ; Angela Fan ; Ledell Wu ; Jason Weston ; Douwe Kiela ; Adina Williams
HIGHLIGHT: In this work, we propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions: bias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker. Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information. In addition, we collect a novel, crowdsourced evaluation benchmark of utterance-level gender rewrites.
123, TITLE: A Controllable Model of Grounded Response Generation
http://arxiv.org/abs/2005.00613
AUTHORS: Zeqiu Wu ; Michel Galley ; Chris Brockett ; Yizhe Zhang ; Xiang Gao ; Chris Quirk ; Rik Koncel-Kedziorski ; Jianfeng Gao ; Hannaneh Hajishirzi ; Mari Ostendorf ; Bill Dolan
HIGHLIGHT: We propose a framework that we call controllable grounded response generation (CGRG), in which lexical control phrases are either provided by a user or automatically extracted by a content planner from dialogue context and grounding knowledge.
124, TITLE: Constraint-Based Causal Discovery In The Presence Of Cycles
http://arxiv.org/abs/2005.00610
AUTHORS: Joris M. Mooij ; Tom Claassen
COMMENTS: Submitted to UAI 2020
HIGHLIGHT: In this work, we show that, surprisingly, the output of the Fast Causal Inference (FCI) algorithm is correct if it is applied to observational data generated by a system that involves feedback.
125, TITLE: Predicting Declension Class from Form and Meaning
http://arxiv.org/abs/2005.00626
AUTHORS: Adina Williams ; Tiago Pimentel ; Arya D. McCarthy ; Hagen Blix ; Eleanor Chodroff ; Ryan Cotterell
COMMENTS: 14 pages, 2 figures, the is the camera-ready version which was accepted at the 2020 Annual Conference of the Association for Computational Linguistics (ACL 2020)
HIGHLIGHT: Here, we investigate the strength of those clues.
126, TITLE: Intermediate-Task Transfer Learning with Pretrained Models for Natural Language Understanding: When and Why Does It Work?
http://arxiv.org/abs/2005.00628
AUTHORS: Yada Pruksachatkun ; Jason Phang ; Haokun Liu ; Phu Mon Htut ; Xiaoyi Zhang ; Richard Yuanzhe Pang ; Clara Vania ; Katharina Kann ; Samuel R. Bowman
COMMENTS: ACL 2020
HIGHLIGHT: To investigate this, we perform a large-scale study on the pretrained RoBERTa model with 110 intermediate-target task combinations.
127, TITLE: Minimally Supervised Categorization of Text with Metadata
http://arxiv.org/abs/2005.00624
AUTHORS: Yu Zhang ; Yu Meng ; Jiaxin Huang ; Frank F. Xu ; Xuan Wang ; Jiawei Han
COMMENTS: 10 pages; Accepted to SIGIR 2020
HIGHLIGHT: In recognition of these two challenges, we propose MetaCat, a minimally supervised framework to categorize text with metadata.
128, TITLE: A Joint Framework for Inductive Representation Learning and Explainable Reasoning in Knowledge Graphs
http://arxiv.org/abs/2005.00637
AUTHORS: Rajarshi Bhowmik ; Gerard de Melo
HIGHLIGHT: To overcome this issue, we propose an inductive representation learning framework that is able to learn representations of previously unseen entities.
129, TITLE: From Zero to Hero: On the Limitations of Zero-Shot Cross-Lingual Transfer with Multilingual Transformers
http://arxiv.org/abs/2005.00633
AUTHORS: Anne Lauscher ; Vinit Ravishankar ; Ivan Vulić ; Goran Glavaš
HIGHLIGHT: In this work, we analyze their limitations and show that cross-lingual transfer via massively multilingual transformers, much like transfer via cross-lingual word embeddings, is substantially less effective in resource-lean scenarios and for distant languages.
130, TITLE: We Need to Talk About Random Splits
http://arxiv.org/abs/2005.00636
AUTHORS: Anders Søgaard ; Sebastian Ebert ; Joost Bastings ; Katja Filippova
HIGHLIGHT: Instead of using multiple random splits, we propose that future benchmarks instead include multiple, independent test sets.
131, TITLE: Using Noisy Self-Reports to Predict Twitter User Demographics
http://arxiv.org/abs/2005.00635
AUTHORS: Zach Wood-Doughty ; Paiheng Xu ; Xiao Liu ; Mark Dredze
COMMENTS: The first two authors had an equal contribution
HIGHLIGHT: We present a method to identify self-reports of race and ethnicity from Twitter profile descriptions.
132, TITLE: KLEJ: Comprehensive Benchmark for Polish Language Understanding
http://arxiv.org/abs/2005.00630
AUTHORS: Piotr Rybak ; Robert Mroczkowski ; Janusz Tracz ; Ireneusz Gawlik
HIGHLIGHT: To alleviate this issue, we introduce a comprehensive multi-task benchmark for Polish language understanding, accompanied by an online leaderboard.
133, TITLE: Evaluating and Aggregating Feature-based Model Explanations
http://arxiv.org/abs/2005.00631
AUTHORS: Umang Bhatt ; Adrian Weller ; José M. F. Moura
COMMENTS: Accepted at IJCAI 2020
HIGHLIGHT: This paper proposes quantitative evaluation criteria for feature-based explanations: low sensitivity, high faithfulness, and low complexity.
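As a rough illustration of the first criterion only (not the paper's exact definition), sensitivity can be approximated by perturbing the input slightly and measuring how much a black-box explanation function's output moves; the helper name max_sensitivity and the sampling scheme below are assumptions for the sketch.

    import numpy as np

    def max_sensitivity(explain_fn, x, radius=0.02, n_samples=20, seed=0):
        # Rough estimate of explanation sensitivity: the largest change in the
        # explanation under small random perturbations of the input (lower is better).
        # `explain_fn` maps an input array to an attribution array of the same shape.
        rng = np.random.default_rng(seed)
        base = explain_fn(x)
        worst = 0.0
        for _ in range(n_samples):
            noise = rng.uniform(-radius, radius, size=x.shape)
            worst = max(worst, float(np.linalg.norm(explain_fn(x + noise) - base)))
        return worst

    # Toy usage with a trivial "explanation" (gradient of x**2 is 2x):
    print(max_sensitivity(lambda v: 2 * v, np.ones(4)))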
134, TITLE: Text and Causal Inference: A Review of Using Text to Remove Confounding from Causal Estimates
http://arxiv.org/abs/2005.00649
AUTHORS: Katherine A. Keith ; David Jensen ; Brendan O'Connor
COMMENTS: Accepted to ACL 2020
HIGHLIGHT: Despite increased attention on adjusting for confounding using text, there are still many open problems, which we highlight in this paper.
135, TITLE: Syntactic Question Abstraction and Retrieval for Data-Scarce Semantic Parsing
http://arxiv.org/abs/2005.00644
AUTHORS: Wonseok Hwang ; Jinyeong Yim ; Seunghyun Park ; Minjoon Seo
COMMENTS: Accepted to AKBC 2020 (conference paper)
HIGHLIGHT: Here, we propose Syntactic Question Abstraction and Retrieval (SQAR), a method to build a neural semantic parser that translates a natural language (NL) query to a SQL logical form (LF) with less than 1,000 annotated examples.
136, TITLE: Scalable Multi-Hop Relational Reasoning for Knowledge-Aware Question Answering
http://arxiv.org/abs/2005.00646
AUTHORS: Yanlin Feng ; Xinyue Chen ; Bill Yuchen Lin ; Peifeng Wang ; Jun Yan ; Xiang Ren
COMMENTS: 13 pages, 8 figures, Project page: https://github.com/INK-USC/MHGRN
HIGHLIGHT: In this paper, we propose a novel knowledge-aware approach that equips PTLMs with a multi-hop relational reasoning module, named multi-hop graph relation networks (MHGRN).
137, TITLE: Spatial Dependency Parsing for 2D Document Understanding
http://arxiv.org/abs/2005.00642
AUTHORS: Wonseok Hwang ; Jinyeong Yim ; Seunghyun Park ; Sohee Yang ; Minjoon Seo
HIGHLIGHT: To tackle these issues, we propose SPADE (SPatial DEpendency parser), an end-to-end spatial dependency parser that is serializer-free and capable of modeling an arbitrary number of information layers, making it suitable for parsing structure-rich documents such as receipts and multimodal documents such as name cards.
138, TITLE: Low-Dimensional Hyperbolic Knowledge Graph Embeddings
http://arxiv.org/abs/2005.00545
AUTHORS: Ines Chami ; Adva Wolf ; Da-Cheng Juan ; Frederic Sala ; Sujith Ravi ; Christopher Ré
HIGHLIGHT: In this work, we introduce a class of hyperbolic KG embedding models that simultaneously capture hierarchical and logical patterns.
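For background only (not code from the paper), hyperbolic KG embedding models typically score triples using distances in the Poincaré ball; the standard Poincaré distance used as their building block can be computed as below, with relation-specific transformations added on top.

    import numpy as np

    def poincare_distance(u, v, eps=1e-9):
        # Distance between two points inside the unit Poincare ball (norms < 1).
        uu = np.clip(np.sum(u * u), 0.0, 1.0 - eps)
        vv = np.clip(np.sum(v * v), 0.0, 1.0 - eps)
        duv = np.sum((u - v) ** 2)
        return np.arccosh(1.0 + 2.0 * duv / ((1.0 - uu) * (1.0 - vv)))

    # Toy usage with two low-dimensional embeddings:
    print(poincare_distance(np.array([0.1, 0.2]), np.array([0.4, -0.3])))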
139, TITLE: GoEmotions: A Dataset of Fine-Grained Emotions
http://arxiv.org/abs/2005.00547
AUTHORS: Dorottya Demszky ; Dana Movshovitz-Attias ; Jeongwoo Ko ; Alan Cowen ; Gaurav Nemade ; Sujith Ravi
COMMENTS: Accepted to ACL 2020
HIGHLIGHT: We introduce GoEmotions, the largest manually annotated dataset of 58k English Reddit comments, labeled for 27 emotion categories or Neutral.
140, TITLE: RigNet: Neural Rigging for Articulated Characters
http://arxiv.org/abs/2005.00559
AUTHORS: Zhan Xu ; Yang Zhou ; Evangelos Kalogerakis ; Chris Landreth ; Karan Singh
COMMENTS: SIGGRAPH 2020. Project page https://zhan-xu.github.io/rig-net/
HIGHLIGHT: We present RigNet, an end-to-end automated method for producing animation rigs from input character models.
141, TITLE: POINTER: Constrained Text Generation via Insertion-based Generative Pre-training
http://arxiv.org/abs/2005.00558
AUTHORS: Yizhe Zhang ; Guoyin Wang ; Chunyuan Li ; Zhe Gan ; Chris Brockett ; Bill Dolan
HIGHLIGHT: To address this challenge, we present POINTER, a simple yet novel insertion-based approach for hard-constrained text generation.
142, TITLE: Does Visual Self-Supervision Improve Learning of Speech Representations?
http://arxiv.org/abs/2005.01400
AUTHORS: Abhinav Shukla ; Stavros Petridis ; Maja Pantic
HIGHLIGHT: This work (1) investigates visual self-supervision via face reconstruction to guide the learning of audio representations; (2) proposes two audio-only self-supervision approaches for speech representation learning; (3) shows that a multi-task combination of the proposed visual and audio self-supervision is beneficial for learning richer features that are more robust in noisy conditions; (4) shows that self-supervised pretraining leads to a superior weight initialization, which is especially useful to prevent overfitting and lead to faster model convergence on smaller sized datasets.
143, TITLE: When BERT Plays the Lottery, All Tickets Are Winning
http://arxiv.org/abs/2005.00561
AUTHORS: Sai Prasanna ; Anna Rogers ; Anna Rumshisky
COMMENTS: work in progress
HIGHLIGHT: We consider this phenomenon from the perspective of the lottery ticket hypothesis.
144, TITLE: Smart Containers With Bidding Capacity: A Policy Gradient Algorithm for Semi-Cooperative Learning
http://arxiv.org/abs/2005.00565
AUTHORS: Wouter van Heeswijk
COMMENTS: 15 pages
HIGHLIGHT: To this end, we develop a reinforcement learning algorithm based on the policy gradient framework.
145, TITLE: Learning Collaborative Agents with Rule Guidance for Knowledge Graph Reasoning
http://arxiv.org/abs/2005.00571
AUTHORS: Deren Lei ; Gangrong Jiang ; Xiaotao Gu ; Kexuan Sun ; Yuning Mao ; Xiang Ren
HIGHLIGHT: In this paper, we propose to fuse these two paradigms to get the best of both worlds.
146, TITLE: When Ensembling Smaller Models is More Efficient than Single Large Models
http://arxiv.org/abs/2005.00570
AUTHORS: Dan Kondratyuk ; Mingxing Tan ; Matthew Brown ; Boqing Gong
HIGHLIGHT: This presents an interesting observation that output diversity in ensembling can often be more efficient than training larger models, especially when the models approach the size of what their dataset can foster.
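As a generic illustration of the comparison being made (a sketch under the assumption of PyTorch classifiers, not the paper's code), an ensemble prediction is simply the average of each member's class probabilities, so its inference cost is the sum of the members' costs.

    import torch

    def ensemble_predict(models, x):
        # Average the class-probability outputs of several independently trained
        # small models; compare their summed cost against one large model.
        with torch.no_grad():
            probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models])
        return probs.mean(dim=0)

    def total_parameters(models):
        # Parameter count of the whole ensemble, for the efficiency comparison.
        return sum(p.numel() for m in models for p in m.parameters())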
147, TITLE: Exploring Pre-training with Alignments for RNN Transducer based End-to-End Speech Recognition
http://arxiv.org/abs/2005.00572
AUTHORS: Hu Hu ; Rui Zhao ; Jinyu Li ; Liang Lu ; Yifan Gong
COMMENTS: Accepted by ICASSP 2020
HIGHLIGHT: In this work, we conversely leverage external alignments to seed the RNN-T model.
148, TITLE: LIMEtree: Interactively Customisable Explanations Based on Local Surrogate Multi-output Regression Trees
http://arxiv.org/abs/2005.01427
AUTHORS: Kacper Sokol ; Peter Flach
HIGHLIGHT: In this work we introduce a model-agnostic and post-hoc local explainability technique for black-box predictions called LIMEtree, which employs surrogate multi-output regression trees.
149, TITLE: Clinical Reading Comprehension: A Thorough Analysis of the emrQA Dataset
http://arxiv.org/abs/2005.00574
AUTHORS: Xiang Yue ; Bernal Jimenez Gutierrez ; Huan Sun
COMMENTS: Accepted by ACL 2020
HIGHLIGHT: In this paper, we provide an in-depth analysis of this dataset and the clinical reading comprehension (CliniRC) task.
150, TITLE: Learning to Complement Humans
http://arxiv.org/abs/2005.00582
AUTHORS: Bryan Wilder ; Eric Horvitz ; Ece Kamar
COMMENTS: Accepted at IJCAI 2020
HIGHLIGHT: To date, systems aimed at complementing the skills of people have employed models trained to be as accurate as possible in isolation.
151, TITLE: Multi-scale Transformer Language Models
http://arxiv.org/abs/2005.00581
AUTHORS: Sandeep Subramanian ; Ronan Collobert ; Marc'Aurelio Ranzato ; Y-Lan Boureau
HIGHLIGHT: We investigate multi-scale transformer language models that learn representations of text at multiple scales, and present three different architectures that have an inductive bias to handle the hierarchical nature of language.
152, TITLE: Correlating Edge, Pose with Parsing
http://arxiv.org/abs/2005.01431
AUTHORS: Ziwei Zhang ; Chi Su ; Liang Zheng ; Xiaodong Xie
COMMENTS: CVPR2020
HIGHLIGHT: To capture such correlations, we propose a Correlation Parsing Machine (CorrPM) employing a heterogeneous non-local block to discover the spatial affinity among feature maps from the edge, pose and parsing.
153, TITLE: Learning an Unreferenced Metric for Online Dialogue Evaluation
http://arxiv.org/abs/2005.00583
AUTHORS: Koustuv Sinha ; Prasanna Parthasarathi ; Jasmine Wang ; Ryan Lowe ; William L. Hamilton ; Joelle Pineau
COMMENTS: Accepted at ACL 2020, 5 pages
HIGHLIGHT: Here, we propose an unreferenced automated evaluation metric that uses large pre-trained language models to extract latent representations of utterances, and leverages the temporal transitions that exist between them.
154, TITLE: Evaluating Robustness to Input Perturbations for Neural Machine Translation
http://arxiv.org/abs/2005.00580
AUTHORS: Xing Niu ; Prashant Mathur ; Georgiana Dinu ; Yaser Al-Onaizan
COMMENTS: Accepted at ACL 2020
HIGHLIGHT: This paper proposes additional metrics which measure the relative degradation and changes in translation when small perturbations are added to the input.
155, TITLE: Global Table Extractor (GTE): A Framework for Joint Table Identification and Cell Structure Recognition Using Visual Context
http://arxiv.org/abs/2005.00589
AUTHORS: Xinyi Zheng ; Doug Burdick ; Lucian Popa ; Nancy Xin Ru Wang
HIGHLIGHT: We present Global Table Extractor (GTE), a vision-guided systematic framework for joint table detection and cell structure recognition, which could be built on top of any object detection model.
156, TITLE: Automated eye disease classification method from anterior eye image using anatomical structure focused image classification technique
http://arxiv.org/abs/2005.01433
AUTHORS: Masahiro Oda ; Takefumi Yamaguchi ; Hideki Fukuoka ; Yuta Ueno ; Kensaku Mori
COMMENTS: Accepted as a poster presentation at SPIE Medical Imaging 2020, Houston, TX, USA
HIGHLIGHT: This paper presents an automated method for classifying infective and non-infective diseases from anterior eye images.
157, TITLE: Strong subalgebras and the Constraint Satisfaction Problem
http://arxiv.org/abs/2005.00593
AUTHORS: Dmitriy Zhuk
HIGHLIGHT: In this paper we consider one of two main ingredients of my proof, that is, strong subalgebras that allow us to reduce domains of the variables iteratively.
158, TITLE: Stochastic Sparse Subspace Clustering
http://arxiv.org/abs/2005.01449
AUTHORS: Ying Chen ; Chun-Guang Li ; Chong You
COMMENTS: 16 pages, 9 figures and 8 tables. This work is accepted by IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2020
HIGHLIGHT: We introduce dropout, based on randomly dropping out data points in the self-expressive model, to address the issue of over-segmentation.
159, TITLE: Learning from Noisy Labels with Noise Modeling Network
http://arxiv.org/abs/2005.00596
AUTHORS: Zhuolin Jiang ; Jan Silovsky ; Man-Hung Siu ; William Hartmann ; Herbert Gish ; Sancar Adali
HIGHLIGHT: In this paper, we extend the state of the art of training classifiers to jointly deal with both forms of errorful data.
160, TITLE: CARRADA Dataset: Camera and Automotive Radar with Range-Angle-Doppler Annotations
http://arxiv.org/abs/2005.01456
AUTHORS: A. Ouaknine ; A. Newson ; J. Rebut ; F. Tupin ; P. Pérez
COMMENTS: 8 pages, 5 figures. Submitted to ICPR 2020
HIGHLIGHT: In this work, we introduce CARRADA, a dataset of synchronized camera and radar recordings with range-angle-Doppler annotations.
161, TITLE: Reducing graph transversals via edge contractions
http://arxiv.org/abs/2005.01460
AUTHORS: Paloma T. Lima ; Vinicius F. dos Santos ; Ignasi Sau ; Uéverton S. Souza
COMMENTS: 18 pages, 2 figures
HIGHLIGHT: We focus on graph parameters defined as the minimum size of a vertex set that hits all the occurrences of graphs in a collection ${\cal H}$ according to a fixed containment relation.
162, TITLE: Spiking Neural Networks Hardware Implementations and Challenges: a Survey
http://arxiv.org/abs/2005.01467
AUTHORS: Maxence Bouvier ; Alexandre Valentian ; Thomas Mesquida ; François Rummens ; Marina Reyboz ; Elisa Vianello ; Edith Beigné
COMMENTS: Pre-print version of the file authorized for publication
HIGHLIGHT: The scope of existing solutions is extensive; we thus present the general framework and study on a case-by-case basis the relevant particularities.
163, TITLE: Using Context in Neural Machine Translation Training Objectives
http://arxiv.org/abs/2005.01483
AUTHORS: Danielle Saunders ; Felix Stahlberg ; Bill Byrne
COMMENTS: ACL 2020
HIGHLIGHT: We present Neural Machine Translation (NMT) training using document-level metrics with batch-level documents.
164, TITLE: An Accurate Model for Predicting the (Graded) Effect of Context in Word Similarity Based on Bert
http://arxiv.org/abs/2005.01006
AUTHORS: Wei Bao ; Hongshu Che ; Jiandong Zhang
HIGHLIGHT: We apply several methods in calculating the distance between two embedding vectors generated by Bidirectional Encoder Representation from Transformer (BERT).
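A minimal sketch of one plausible distance computation, assuming HuggingFace Transformers and cosine similarity between contextual embeddings of the target word; the paper evaluates several variants, and the model name, pooling, and naive sub-token alignment here are assumptions of the sketch.

    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def word_embedding(sentence, word):
        # Mean-pool the last-layer vectors of the sub-tokens belonging to `word`.
        enc = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]      # (seq_len, 768)
        word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
        ids = enc["input_ids"][0].tolist()
        for i in range(len(ids) - len(word_ids) + 1):       # naive sub-token match
            if ids[i:i + len(word_ids)] == word_ids:
                return hidden[i:i + len(word_ids)].mean(dim=0)
        raise ValueError("word not found in sentence")

    e1 = word_embedding("He sat on the bank of the river.", "bank")
    e2 = word_embedding("She deposited money at the bank.", "bank")
    print(torch.cosine_similarity(e1, e2, dim=0))           # graded effect of context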
165, TITLE: Explaining How Deep Neural Networks Forget by Deep Visualization
http://arxiv.org/abs/2005.01004
AUTHORS: Giang Nguyen ; Shuan Chen ; Tae Joon Jun ; Daeyoung Kim
COMMENTS: 9 pages, 4 figures, 1 table. arXiv admin note: substantial text overlap with arXiv:2001.01578
HIGHLIGHT: Taking the advantages of interpretable machine learning (interpretable ML), this paper proposes a novel tool called Catastrophic Forgetting Dissector (or CFD) to explain catastrophic forgetting in continual learning settings.
166, TITLE: Towards A Sign Language Gloss Representation Of Modern Standard Arabic
http://arxiv.org/abs/2005.01497
AUTHORS: Salma El Anigri ; Mohammed Majid Himmi ; Abdelhak Mahmoudi
COMMENTS: 4 pages, 2 figures, AfricaNLP 2020 workshop at ICLR 2020
HIGHLIGHT: In this work, we are interested in the translation of Modern Standard Arabic (MSAr) into sign language.
167, TITLE: Certified Semantics for Relational Programming
http://arxiv.org/abs/2005.01018
AUTHORS: Dmitry Rozplokhas ; Andrey Vyatkin ; Dmitry Boulytchev
HIGHLIGHT: We present a formal study of semantics for the relational programming language miniKanren.
168, TITLE: Lupulus: A Flexible Hardware Accelerator for Neural Networks
http://arxiv.org/abs/2005.01016
AUTHORS: Andreas Toftegaard Kristensen ; Robert Giterman ; Alexios Balatsoukas-Stimming ; Andreas Burg
COMMENTS: To be presented at the 2020 International Conference on Acoustics, Speech, and Signal Processing
HIGHLIGHT: In this work, we present a flexible hardware accelerator for neural networks, called Lupulus, supporting various methods for scheduling and mapping of operations onto the accelerator.
169, TITLE: Feature-metric Registration: A Fast Semi-supervised Approach for Robust Point Cloud Registration without Correspondences
http://arxiv.org/abs/2005.01014
AUTHORS: Xiaoshui Huang ; Guofeng Mei ; Jian Zhang
COMMENTS: CVPR2020 final
HIGHLIGHT: We present a fast feature-metric point cloud registration framework, which enforces the optimisation of registration by minimising a feature-metric projection error without correspondences.
170, TITLE: On the Benefits of Models with Perceptually-Aligned Gradients
http://arxiv.org/abs/2005.01499
AUTHORS: Gunjan Aggarwal ; Abhishek Sinha ; Nupur Kumari ; Mayank Singh
COMMENTS: Accepted at ICLR 2020 Workshop: Towards Trustworthy ML
HIGHLIGHT: In this paper, we leverage models with interpretable perceptually-aligned features and show that adversarial training with low max-perturbation bound can improve the performance of models for zero-shot and weakly supervised localization tasks.
171, TITLE: A Position Aware Decay Weighted Network for Aspect based Sentiment Analysis
http://arxiv.org/abs/2005.01027
AUTHORS: Avinash Madasu ; Vijjini Anvesh Rao
COMMENTS: Accepted Full Paper at 25th International Conference on Applications of Natural Language to Information Systems, June 2020, DFKI Saarbrücken, Germany
HIGHLIGHT: In this paper, we propose a model that leverages the positional information of the aspect.
172, TITLE: Locally testable codes via high-dimensional expanders
http://arxiv.org/abs/2005.01045
AUTHORS: Yotam Dikstein ; Irit Dinur ; Prahladh Harsha ; Noga Ron-Zewi
HIGHLIGHT: In this paper, we present a new approach towards constructing such LTCs using the machinery of high-dimensional expanders.
173, TITLE: Fusion of visible and infrared images via complex function
http://arxiv.org/abs/2005.01047
AUTHORS: Ya. Ye. Khaustov ; D. Ye ; Ye. Ryzhov ; E. Lychkovskyy ; Yu. A. Nastishin
COMMENTS: 12 pages with 7 figures, submitted to Military Technical Collection 22 (2020); see http://vtz.asv.gov.ua
HIGHLIGHT: We propose an algorithm for the fusion of partial images collected from the visual and infrared cameras such that the visual and infrared images are the real and imaginary parts of a complex function.
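A toy sketch of the basic idea, assuming co-registered, normalised single-channel images; how the complex function is mapped back to a displayed image is not specified in the highlight, so the modulus used here is only an assumption.

    import numpy as np

    def fuse_visible_infrared(visible, infrared):
        # visible, infrared: co-registered 2-D arrays scaled to [0, 1]
        z = visible + 1j * infrared          # visible as real part, IR as imaginary part
        fused = np.abs(z)                    # modulus as the fused intensity (assumption)
        return fused / (fused.max() + 1e-9)  # rescale back to [0, 1]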
174, TITLE: NTIRE 2020 Challenge on Perceptual Extreme Super-Resolution: Methods and Results
http://arxiv.org/abs/2005.01056
AUTHORS: Kai Zhang ; Shuhang Gu ; Radu Timofte ; Taizhang Shang ; Qiuju Dai ; Shengchen Zhu ; Tong Yang ; Yandong Guo ; Younghyun Jo ; Sejong Yang ; Seon Joo Kim ; Lin Zha ; Jiande Jiang ; Xinbo Gao ; Wen Lu ; Jing Liu ; Kwangjin Yoon ; Taegyun Jeon ; Kazutoshi Akita ; Takeru Ooba ; Norimichi Ukita ; Zhipeng Luo ; Yuehan Yao ; Zhenyu Xu ; Dongliang He ; Wenhao Wu ; Yukang Ding ; Chao Li ; Fu Li ; Shilei Wen ; Jianwei Li ; Fuzhi Yang ; Huan Yang ; Jianlong Fu ; Byung-Hoon Kim ; JaeHyun Baek ; Jong Chul Ye ; Yuchen Fan ; Thomas S. Huang ; Junyeop Lee ; Bokyeung Lee ; Jungki Min ; Gwantae Kim ; Kanghyu Lee ; Jaihyun Park ; Mykola Mykhailych ; Haoyu Zhong ; Yukai Shi ; Xiaojun Yang ; Zhijing Yang ; Liang Lin ; Tongtong Zhao ; Jinjia Peng ; Huibing Wang ; Zhi Jin ; Jiahao Wu ; Yifu Chen ; Chenming Shang ; Huanrong Zhang ; Jeongki Min ; Hrishikesh P S ; Densen Puthussery ; Jiji C V
COMMENTS: CVPRW 2020
HIGHLIGHT: This paper reviews the NTIRE 2020 challenge on perceptual extreme super-resolution with focus on proposed solutions and results.
175, TITLE: A Two-Stage Masked LM Method for Term Set Expansion
http://arxiv.org/abs/2005.01063
AUTHORS: Guy Kushilevitz ; Shaul Markovitch ; Yoav Goldberg
COMMENTS: Short paper accepted to ACL 2020
HIGHLIGHT: We harness the power of neural masked language models (MLM) and propose a novel TSE algorithm, which combines the pattern-based and distributional approaches.
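The pattern-based half of such an approach can be illustrated with a plain fill-mask query; this is a toy sketch, not the paper's two-stage procedure, and the model name, seed terms, and pattern are arbitrary choices.

    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")
    seed_terms = ["paris", "london", "berlin"]

    # Query the masked LM with a simple co-hyponym pattern built from the seed set.
    pattern = f"{seed_terms[0]}, {seed_terms[1]}, {seed_terms[2]} and [MASK]."
    for candidate in fill_mask(pattern, top_k=10):
        print(candidate["token_str"], round(candidate["score"], 3))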
176, TITLE: On the Limitations of Cross-lingual Encoders as Exposed by Reference-Free Machine Translation Evaluation
http://arxiv.org/abs/2005.01196
AUTHORS: Wei Zhao ; Goran Glavaš ; Maxime Peyrard ; Yang Gao ; Robert West ; Steffen Eger
COMMENTS: ACL2020 Camera Ready
HIGHLIGHT: In this paper, we concern ourselves with reference-free machine translation (MT) evaluation where we directly compare source texts to (sometimes low-quality) system translations, which represents a natural adversarial setup for multilingual encoders.
177, TITLE: System Metamodel Formalism
http://arxiv.org/abs/2005.01192
AUTHORS: Patrik Christen
COMMENTS: 12 pages, 1 table
HIGHLIGHT: In this study, a mathematical definition of the system metamodel and its model parameters is provided, and it is proved that concrete mathematical models, i.e. cellular automata and artificial neural networks, can be created from it.
178, TITLE: Influence Paths for Characterizing Subject-Verb Number Agreement in LSTM Language Models
http://arxiv.org/abs/2005.01190
AUTHORS: Kaiji Lu ; Piotr Mardziel ; Klas Leino ; Matt Fredrikson ; Anupam Datta
COMMENTS: ACL 2020
HIGHLIGHT: We introduce influence paths, a causal account of structural properties as carried by paths across gates and neurons of a recurrent neural network.
179, TITLE: A Benchmark for Structured Procedural Knowledge Extraction from Cooking Videos
http://arxiv.org/abs/2005.00706
AUTHORS: Frank F. Xu ; Lei Ji ; Botian Shi ; Junyi Du ; Graham Neubig ; Yonatan Bisk ; Nan Duan
HIGHLIGHT: Humans often learn this knowledge from instructional text and video, and in this paper we aim to perform automatic extraction of this knowledge in a similar way. We first create a manually annotated, large evaluation dataset including over 350 instructional cooking videos along with over 15,000 English sentences in transcripts spanning over 89 recipes.