title | authors | abstract | url | detail_url | abs | OpenReview | Download PDF | tags
|---|---|---|---|---|---|---|---|---|
DirectProbe: Studying Representations without Classifiers
|
Yichu Zhou, Vivek Srikumar
|
Understanding how linguistic structure is encoded in contextualized embeddings could help explain their impressive performance across NLP tasks. Existing approaches for probing them usually call for training classifiers and use the accuracy, mutual information, or complexity as a proxy for the representation’s goodness. In this work, we argue that doing so can be unreliable because different representations may need different classifiers. We develop a heuristic, DirectProbe, that directly studies the geometry of a representation by building upon the notion of a version space for a task. Experiments with several linguistic tasks and contextualized embeddings show that, even without training classifiers, DirectProbe can shed light on how an embedding space represents labels and also anticipate classifier performance for the representation.
|
https://aclanthology.org/2021.naacl-main.401
|
https://aclanthology.org/2021.naacl-main.401.pdf
|
NAACL 2021
|
|||
Evaluating the Values of Sources in Transfer Learning
|
Md Rizwan Parvez, Kai-Wei Chang
|
Transfer learning that adapts a model trained on data-rich sources to low-resource targets has been widely applied in natural language processing (NLP). However, when training a transfer model over multiple sources, not every source is equally useful for the target. To better transfer a model, it is essential to understand the values of the sources. In this paper, we develop an efficient source valuation framework for quantifying the usefulness of the sources (e.g., domains or languages) in transfer learning based on the Shapley value method. Experiments and comprehensive analyses on both cross-domain and cross-lingual transfers demonstrate that our framework is not only effective in choosing useful transfer sources but also that the source values match the intuitive source-target similarity.
|
https://aclanthology.org/2021.naacl-main.402
|
https://aclanthology.org/2021.naacl-main.402.pdf
|
NAACL 2021
|
|||
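A minimal sketch of the Shapley-value style of source valuation described in “Evaluating the Values of Sources in Transfer Learning” above, using Monte Carlo permutation sampling; the utility function and source names are hypothetical stand-ins, not the authors' implementation.

```python
import random

def shapley_values(sources, utility, n_permutations=2000, seed=0):
    """Estimate each source's Shapley value by averaging its marginal
    contribution to the target-task utility over random permutations."""
    rng = random.Random(seed)
    values = {s: 0.0 for s in sources}
    for _ in range(n_permutations):
        order = sources[:]
        rng.shuffle(order)
        prev = utility(frozenset())
        chosen = set()
        for s in order:
            chosen.add(s)
            cur = utility(frozenset(chosen))
            values[s] += cur - prev
            prev = cur
    return {s: v / n_permutations for s, v in values.items()}

# Hypothetical utility: target-task score when transferring from a subset of sources.
def toy_utility(subset):
    base = {"news": 0.30, "wiki": 0.20, "reviews": 0.05}
    return min(sum(base[s] for s in subset), 0.45)  # diminishing returns for overlapping sources

print(shapley_values(["news", "wiki", "reviews"], toy_utility))
```

In practice the utility would be the transfer model's validation score on the target after training on the chosen source subset; the Monte Carlo estimate avoids enumerating all subsets.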
Too Much in Common: Shifting of Embeddings in Transformer Language Models and its Implications
|
Daniel Biś, Maksim Podkorytov, Xiuwen Liu
|
The success of language models based on the Transformer architecture appears to be inconsistent with observed anisotropic properties of representations learned by such models. We resolve this by showing, contrary to previous studies, that the representations do not occupy a narrow cone, but rather drift in common directions. At any training step, all of the embeddings except for the ground-truth target embedding are updated with gradients in the same direction. Compounded over the training set, the embeddings drift and share common components, manifested in their shape in all the models we have empirically tested. Our experiments show that isotropy can be restored using a simple transformation.
|
https://aclanthology.org/2021.naacl-main.403
|
https://aclanthology.org/2021.naacl-main.403.pdf
|
NAACL 2021
|
|||
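To make the geometric claim in “Too Much in Common” above concrete, here is a small numpy sketch that measures anisotropy as the average cosine similarity between random embedding pairs and shows that mean-centering restores isotropy on synthetic data with a shared drift component. Mean-centering is one common choice of simple transformation; the paper's exact transformation may differ.

```python
import numpy as np

def mean_cosine(x, n_pairs=5000, seed=0):
    """Average cosine similarity over random pairs; values near 0 indicate isotropy."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(x), n_pairs)
    j = rng.integers(0, len(x), n_pairs)
    xn = x / np.linalg.norm(x, axis=1, keepdims=True)
    return float(np.mean(np.sum(xn[i] * xn[j], axis=1)))

rng = np.random.default_rng(1)
drift = rng.normal(size=768) * 5.0            # common component shared by all embeddings
emb = rng.normal(size=(10000, 768)) + drift   # synthetic "drifted" embeddings

print("before centering:", mean_cosine(emb))                     # close to 1: anisotropic
print("after centering: ", mean_cosine(emb - emb.mean(axis=0)))  # close to 0: isotropic
```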
On the Inductive Bias of Masked Language Modeling: From Statistical to Syntactic Dependencies
|
Tianyi Zhang, Tatsunori B. Hashimoto
|
We study how masking and predicting tokens in an unsupervised fashion can give rise to linguistic structures and downstream performance gains. Recent theories have suggested that pretrained language models acquire useful inductive biases through masks that implicitly act as cloze reductions for downstream tasks. While appealing, we show that the success of the random masking strategy used in practice cannot be explained by such cloze-like masks alone. We construct cloze-like masks using task-specific lexicons for three different classification datasets and show that the majority of pretrained performance gains come from generic masks that are not associated with the lexicon. To explain the empirical success of these generic masks, we demonstrate a correspondence between the Masked Language Model (MLM) objective and existing methods for learning statistical dependencies in graphical models. Using this, we derive a method for extracting these learned statistical dependencies in MLMs and show that these dependencies encode useful inductive biases in the form of syntactic structures. In an unsupervised parsing evaluation, simply forming a minimum spanning tree on the implied statistical dependence structure outperforms a classic method for unsupervised parsing (58.74 vs. 55.91 UUAS).
|
https://aclanthology.org/2021.naacl-main.404
|
https://aclanthology.org/2021.naacl-main.404.pdf
|
NAACL 2021
|
|||
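The unsupervised-parsing evaluation in “On the Inductive Bias of Masked Language Modeling” above forms a spanning tree over pairwise statistical dependence scores. The sketch below runs Prim's algorithm for a maximum spanning tree on a hypothetical score matrix; how the scores are extracted from an MLM is not reproduced here.

```python
import numpy as np

def max_spanning_tree(scores):
    """Prim's algorithm: edges of a maximum spanning tree over a symmetric
    matrix of pairwise dependence scores between tokens."""
    n = len(scores)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and (best is None or scores[i][j] > best[0]):
                    best = (scores[i][j], i, j)
        _, i, j = best
        edges.append((i, j))
        in_tree.add(j)
    return edges

tokens = ["the", "dog", "barked", "loudly"]
# Hypothetical symmetric dependence scores (e.g., derived from an MLM).
S = np.array([[0.0, 0.9, 0.2, 0.1],
              [0.9, 0.0, 0.8, 0.3],
              [0.2, 0.8, 0.0, 0.7],
              [0.1, 0.3, 0.7, 0.0]])
for i, j in max_spanning_tree(S):
    print(tokens[i], "--", tokens[j])
```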
Limitations of Autoregressive Models and Their Alternatives
|
Chu-Cheng Lin, Aaron Jaech, Xin Li, Matthew R. Gormley, Jason Eisner
|
Standard autoregressive language models perform only polynomial-time computation to compute the probability of the next symbol. While this is attractive, it means they cannot model distributions whose next-symbol probability is hard to compute. Indeed, they cannot even model them well enough to solve associated easy decision problems for which an engineer might want to consult a language model. These limitations apply no matter how much computation and data are used to train the model, unless the model is given access to oracle parameters that grow superpolynomially in sequence length. Thus, simply training larger autoregressive language models is not a panacea for NLP. Alternatives include energy-based models (which give up efficient sampling) and latent-variable autoregressive models (which give up efficient scoring of a given string). Both are powerful enough to escape the above limitations.
|
https://aclanthology.org/2021.naacl-main.405
|
https://aclanthology.org/2021.naacl-main.405.pdf
|
NAACL 2021
|
|||
On the Transformer Growth for Progressive BERT Training
|
Xiaotao Gu, Liyuan Liu, Hongkun Yu, Jing Li, Chen Chen, Jiawei Han
|
As the excessive pre-training cost arouses the need to improve efficiency, considerable efforts have been made to train BERT progressively, starting from an inferior but low-cost model and gradually increasing the computational complexity. Our objective is to help advance the understanding of such Transformer growth and discover principles that guide progressive training. First, we find that, similar to network architecture selection, Transformer growth also favors compound scaling. Specifically, while existing methods only conduct network growth in a single dimension, we observe that it is beneficial to use compound growth operators and balance multiple dimensions (e.g., depth, width, and input length of the model). Moreover, we explore alternative growth operators in each dimension via controlled comparison to give practical guidance for operator selection. In light of our analyses, the proposed method CompoundGrow speeds up BERT pre-training by 73.6% and 82.2% for the base and large models respectively, while achieving comparable performance.
|
https://aclanthology.org/2021.naacl-main.406
|
https://aclanthology.org/2021.naacl-main.406.pdf
|
NAACL 2021
|
|||
Revisiting Simple Neural Probabilistic Language Models
|
Simeng Sun, Mohit Iyyer
|
Recent progress in language modeling has been driven not only by advances in neural architectures, but also through hardware and optimization improvements. In this paper, we revisit the neural probabilistic language model (NPLM) of Bengio et al. (2003), which simply concatenates word embeddings within a fixed window and passes the result through a feed-forward network to predict the next word. When scaled up to modern hardware, this model (despite its many limitations) performs much better than expected on word-level language model benchmarks. Our analysis reveals that the NPLM achieves lower perplexity than a baseline Transformer with short input contexts but struggles to handle long-term dependencies. Inspired by this result, we modify the Transformer by replacing its first self-attention layer with the NPLM’s local concatenation layer, which results in small but consistent perplexity decreases across three word-level language modeling datasets.
|
https://aclanthology.org/2021.naacl-main.407
|
https://aclanthology.org/2021.naacl-main.407.pdf
|
NAACL 2021
|
|||
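A minimal PyTorch sketch of the concatenation-based model revisited in “Revisiting Simple Neural Probabilistic Language Models” above: embeddings of a fixed window are concatenated and passed through a feed-forward network to predict the next word. Sizes and names are illustrative, not the authors' configuration.

```python
import torch
import torch.nn as nn

class NPLM(nn.Module):
    """Bengio-style NPLM: concatenate a fixed window of word embeddings,
    then predict the next word with a feed-forward network."""
    def __init__(self, vocab_size, window=15, d_emb=128, d_hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_emb)
        self.ff = nn.Sequential(
            nn.Linear(window * d_emb, d_hidden),
            nn.ReLU(),
            nn.Linear(d_hidden, vocab_size),
        )

    def forward(self, context_ids):           # context_ids: (batch, window)
        x = self.embed(context_ids)           # (batch, window, d_emb)
        x = x.flatten(start_dim=1)            # concatenate the window
        return self.ff(x)                     # (batch, vocab_size) next-word logits

model = NPLM(vocab_size=10000)
logits = model(torch.randint(0, 10000, (4, 15)))
print(logits.shape)  # torch.Size([4, 10000])
```

The paper's hybrid variant replaces a Transformer's first self-attention layer with this kind of local concatenation layer; that substitution is not shown here.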
ReadTwice: Reading Very Large Documents with Memories
|
Yury Zemlyanskiy, Joshua Ainslie, Michiel de Jong, Philip Pham, Ilya Eckstein, Fei Sha
|
Knowledge-intensive tasks such as question answering often require assimilating information from different sections of large inputs such as books or article collections. We propose ReadTwice, a simple and effective technique that combines several strengths of prior approaches to model long-range dependencies with Transformers. The main idea is to read text in small segments, in parallel, summarizing each segment into a memory table to be used in a second read of the text. We show that the method outperforms models of comparable size on several question answering (QA) datasets and sets a new state of the art on the challenging NarrativeQA task, with questions about entire books.
|
https://aclanthology.org/2021.naacl-main.408
|
https://aclanthology.org/2021.naacl-main.408.pdf
|
NAACL 2021
|
|||
SCRIPT: Self-Critic PreTraining of Transformers
|
Erik Nijkamp, Bo Pang, Ying Nian Wu, Caiming Xiong
|
We introduce Self-CRItic Pretraining Transformers (SCRIPT) for representation learning of text. The popular masked language modeling (MLM) pretraining methods like BERT replace some tokens with [MASK] and an encoder is trained to recover them, while ELECTRA trains a discriminator to detect replaced tokens proposed by a generator. In contrast, we train a language model as in MLM and further derive a discriminator or critic on top of the encoder without using any additional parameters. That is, the model itself is a critic. SCRIPT combines MLM training and discriminative training for learning rich representations and compute- and sample-efficiency. We demonstrate improved sample-efficiency in pretraining and enhanced representations evidenced by improved downstream task performance on GLUE and SQuAD over strong baselines. Also, the self-critic scores can be directly used as pseudo-log-likelihood for efficient scoring.
|
https://aclanthology.org/2021.naacl-main.409
|
https://aclanthology.org/2021.naacl-main.409.pdf
|
NAACL 2021
|
|||
Learning How to Ask: Querying LMs with Mixtures of Soft Prompts
|
Guanghui Qin, Jason Eisner
|
Natural-language prompts have recently been used to coax pretrained language models into performing other AI tasks, using a fill-in-the-blank paradigm (Petroni et al., 2019) or a few-shot extrapolation paradigm (Brown et al., 2020). For example, language models retain factual knowledge from their training corpora that can be extracted by asking them to “fill in the blank” in a sentential prompt. However, where does this prompt come from? We explore the idea of learning prompts by gradient descent—either fine-tuning prompts taken from previous work, or starting from random initialization. Our prompts consist of “soft words,” i.e., continuous vectors that are not necessarily word type embeddings from the language model. Furthermore, for each task, we optimize a mixture of prompts, learning which prompts are most effective and how to ensemble them. Across multiple English LMs and tasks, our approach hugely outperforms previous methods, showing that the implicit factual knowledge in language models was previously underestimated. Moreover, this knowledge is cheap to elicit: random initialization is nearly as good as informed initialization.
|
https://aclanthology.org/2021.naacl-main.410
|
https://aclanthology.org/2021.naacl-main.410.pdf
|
NAACL 2021
|
|||
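A hedged sketch of the soft-prompt idea in “Learning How to Ask” above: a handful of continuous prompt vectors are prepended to the input embeddings and optimized by gradient descent while the language model stays frozen. The tiny frozen model here is a stand-in; the paper tunes prompts for real pretrained LMs and additionally learns a mixture of prompts, which is omitted.

```python
import torch
import torch.nn as nn

d_model, vocab = 64, 1000

# Stand-in for a frozen pretrained LM: embeddings + one transformer layer + LM head.
embed = nn.Embedding(vocab, d_model)
body = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
lm_head = nn.Linear(d_model, vocab)
for module in (embed, body, lm_head):
    for p in module.parameters():
        p.requires_grad = False

# The only trainable parameters: 5 "soft words" (continuous prompt vectors).
soft_prompt = nn.Parameter(torch.randn(5, d_model) * 0.02)
optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)

input_ids = torch.randint(0, vocab, (8, 12))    # hypothetical fill-in-the-blank inputs
target_ids = torch.randint(0, vocab, (8,))      # hypothetical answer tokens

for step in range(100):
    x = embed(input_ids)                                                # (batch, seq, d)
    x = torch.cat([soft_prompt.expand(x.size(0), -1, -1), x], dim=1)    # prepend soft prompt
    h = body(x)
    logits = lm_head(h[:, -1])                                          # predict at the last position
    loss = nn.functional.cross_entropy(logits, target_ids)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```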
Nutri-bullets Hybrid: Consensual Multi-document Summarization
|
Darsh Shah, Lili Yu, Tao Lei, Regina Barzilay
|
We present a method for generating comparative summaries that highlight similarities and contradictions in input documents. The key challenge in creating such summaries is the lack of large parallel training data required for training typical summarization systems. To this end, we introduce a hybrid generation approach inspired by traditional concept-to-text systems. To enable accurate comparison between different sources, the model first learns to extract pertinent relations from input documents. The content planning component uses deterministic operators to aggregate these relations after identifying a subset for inclusion into a summary. The surface realization component lexicalizes this information using a text-infilling language model. By separately modeling content selection and realization, we can effectively train them with limited annotations. We implemented and tested the model in the domain of nutrition and health, which is rife with inconsistencies. Compared to conventional methods, our framework leads to more faithful, relevant and aggregation-sensitive summarization, while being equally fluent.
|
https://aclanthology.org/2021.naacl-main.411
|
https://aclanthology.org/2021.naacl-main.411.pdf
|
NAACL 2021
|
|||
AVA: an Automatic eValuation Approach for Question Answering Systems
|
Thuy Vu, Alessandro Moschitti
|
We introduce AVA, an automatic evaluation approach for Question Answering, which, given a set of questions associated with Gold Standard answers (references), can estimate system Accuracy. AVA uses Transformer-based language models to encode question, answer, and reference texts. This allows for effectively assessing answer correctness using similarity between the reference and an automatic answer, biased towards the question semantics. To design, train, and test AVA, we built multiple large training, development, and test sets on public and industrial benchmarks. Our innovative solutions achieve up to 74.7% F1 score in predicting human judgment for single answers. Additionally, AVA can be used to evaluate the overall system Accuracy with an error lower than 7% at 95% confidence when measured on several QA systems.
|
https://aclanthology.org/2021.naacl-main.412
|
https://aclanthology.org/2021.naacl-main.412.pdf
|
NAACL 2021
|
|||
SpanPredict: Extraction of Predictive Document Spans with Neural Attention
|
Vivek Subramanian, Matthew Engelhard, Sam Berchuck, Liqun Chen, Ricardo Henao, Lawrence Carin
|
In many natural language processing applications, identifying predictive text can be as important as the predictions themselves. When predicting medical diagnoses, for example, identifying predictive content in clinical notes not only enhances interpretability, but also allows unknown, descriptive (i.e., text-based) risk factors to be identified. We here formalize this problem as predictive extraction and address it using a simple mechanism based on linear attention. Our method preserves differentiability, allowing scalable inference via stochastic gradient descent. Further, the model decomposes predictions into a sum of contributions of distinct text spans. Importantly, we require only document labels, not ground-truth spans. Results show that our model identifies semantically-cohesive spans and assigns them scores that agree with human ratings, while preserving classification performance.
|
https://aclanthology.org/2021.naacl-main.413
|
https://aclanthology.org/2021.naacl-main.413.pdf
|
NAACL 2021
|
|||
Text Editing by Command
|
Felix Faltings, Michel Galley, Gerold Hintz, Chris Brockett, Chris Quirk, Jianfeng Gao, Bill Dolan
|
A prevailing paradigm in neural text generation is one-shot generation, where text is produced in a single step. The one-shot setting is inadequate, however, when the constraints the user wishes to impose on the generated text are dynamic, especially when authoring longer documents. We address this limitation with an interactive text generation setting in which the user interacts with the system by issuing commands to edit existing text. To this end, we propose a novel text editing task, and introduce WikiDocEdits, a dataset of single-sentence edits crawled from Wikipedia. We show that our Interactive Editor, a transformer-based model trained on this dataset, outperforms baselines and obtains positive results in both automatic and human evaluations. We present empirical and qualitative analyses of this model’s performance.
|
https://aclanthology.org/2021.naacl-main.414
|
https://aclanthology.org/2021.naacl-main.414.pdf
|
NAACL 2021
|
|||
A Deep Metric Learning Approach to Account Linking
|
Aleem Khan, Elizabeth Fleming, Noah Schofield, Marcus Bishop, Nicholas Andrews
|
We consider the task of linking social media accounts that belong to the same author in an automated fashion on the basis of the content and meta-data of the corresponding document streams. We focus on learning an embedding that maps variable-sized samples of user activity, ranging from single posts to entire months of activity, to a vector space, where samples by the same author map to nearby points. Our approach does not require human-annotated data for training purposes, which allows us to leverage large amounts of social media content. The proposed model outperforms several competitive baselines under a novel evaluation framework modeled after established recognition benchmarks in other domains. Our method achieves high linking accuracy, even with small samples from accounts not seen at training time, a prerequisite for practical applications of the proposed linking framework.
|
https://aclanthology.org/2021.naacl-main.415
|
https://aclanthology.org/2021.naacl-main.415.pdf
|
NAACL 2021
|
|||
Improving Factual Completeness and Consistency of Image-to-Text Radiology Report Generation
|
Yasuhide Miura, Yuhao Zhang, Emily Tsai, Curtis Langlotz, Dan Jurafsky
|
Neural image-to-text radiology report generation systems offer the potential to improve radiology reporting by reducing the repetitive process of report drafting and identifying possible medical errors. However, existing report generation systems, despite achieving high performance on natural language generation metrics such as CIDEr or BLEU, still suffer from incomplete and inconsistent generations. Here we introduce two new simple rewards to encourage the generation of factually complete and consistent radiology reports: one that encourages the system to generate radiology domain entities consistent with the reference, and one that uses natural language inference to encourage these entities to be described in inferentially consistent ways. We combine these with the novel use of an existing semantic equivalence metric (BERTScore). We further propose a report generation system that optimizes these rewards via reinforcement learning. On two open radiology report datasets, our system substantially improved clinical information extraction F1 by +22.1 points (a relative gain of 63.9%). We further show via a human evaluation and a qualitative analysis that our system leads to generations that are more factually complete and consistent compared to the baselines.
|
https://aclanthology.org/2021.naacl-main.416
|
https://aclanthology.org/2021.naacl-main.416.pdf
|
NAACL 2021
|
|||
Multimodal End-to-End Sparse Model for Emotion Recognition
|
Wenliang Dai, Samuel Cahyawijaya, Zihan Liu, Pascale Fung
|
Existing works in multimodal affective computing tasks, such as emotion recognition and personality recognition, generally adopt a two-phase pipeline by first extracting feature representations for each single modality with hand-crafted algorithms, and then performing end-to-end learning with the extracted features. However, the extracted features are fixed and cannot be further fine-tuned on different target tasks, and manually finding feature extraction algorithms does not generalize or scale well to different tasks, which can lead to sub-optimal performance. In this paper, we develop a fully end-to-end model that connects the two phases and optimizes them jointly. In addition, we restructure the current datasets to enable fully end-to-end training. Furthermore, to reduce the computational overhead brought by the end-to-end model, we introduce a sparse cross-modal attention mechanism for the feature extraction. Experimental results show that our fully end-to-end model significantly surpasses the current state-of-the-art models based on the two-phase pipeline. Moreover, by adding the sparse cross-modal attention, our model can maintain performance with around half the computation in the feature extraction part of the model.
|
https://aclanthology.org/2021.naacl-main.417
|
https://aclanthology.org/2021.naacl-main.417.pdf
|
NAACL 2021
|
|||
MIMOQA: Multimodal Input Multimodal Output Question Answering
|
Hrituraj Singh, Anshul Nasery, Denil Mehta, Aishwarya Agarwal, Jatin Lamba, Balaji Vasan Srinivasan
|
Multimodal research has picked up significantly in the space of question answering with the task being extended to visual question answering, charts question answering as well as multimodal input question answering. However, all these explorations produce a unimodal textual output as the answer. In this paper, we propose a novel task - MIMOQA - Multimodal Input Multimodal Output Question Answering in which the output is also multimodal. Through human experiments, we empirically show that such multimodal outputs provide better cognitive understanding of the answers. We also propose a novel multimodal question-answering framework, MExBERT, that incorporates a joint textual and visual attention towards producing such a multimodal output. Our method relies on a novel multimodal dataset curated for this problem from publicly available unimodal datasets. We show the superior performance of MExBERT against strong baselines on both the automatic as well as human metrics.
|
https://aclanthology.org/2021.naacl-main.418
|
https://aclanthology.org/2021.naacl-main.418.pdf
|
NAACL 2021
|
|||
OCID-Ref: A 3D Robotic Dataset With Embodied Language For Clutter Scene Grounding
|
Ke-Jyun Wang, Yun-Hsuan Liu, Hung-Ting Su, Jen-Wei Wang, Yu-Siang Wang, Winston Hsu, Wen-Chin Chen
|
To effectively apply robots in working environments and assist humans, it is essential to develop and evaluate how visual grounding (VG) can affect machine performance on occluded objects. However, current VG works are limited in working environments, such as offices and warehouses, where objects are usually occluded due to space utilization issues. In our work, we propose a novel OCID-Ref dataset featuring a referring expression segmentation task with referring expressions of occluded objects. OCID-Ref consists of 305,694 referring expressions from 2,300 scenes, providing both RGB image and point cloud inputs. We argue that it is crucial to take advantage of both 2D and 3D signals to resolve the challenging occlusion issues. Our experimental results demonstrate the effectiveness of aggregating 2D and 3D signals, but referring to occluded objects remains challenging for modern visual grounding systems. OCID-Ref is publicly available at https://github.com/lluma/OCID-Ref.
|
https://aclanthology.org/2021.naacl-main.419
|
https://aclanthology.org/2021.naacl-main.419.pdf
|
NAACL 2021
|
|||
Unsupervised Vision-and-Language Pre-training Without Parallel Images and Captions
|
Liunian Harold Li, Haoxuan You, Zhecan Wang, Alireza Zareian, Shih-Fu Chang, Kai-Wei Chang
|
Pre-trained contextual vision-and-language (V&L) models have achieved impressive performance on various benchmarks. However, existing models require a large amount of parallel image-caption data for pre-training. Such data are costly to collect and require cumbersome curation. Inspired by unsupervised machine translation, we investigate if a strong V&L representation model can be learned through unsupervised pre-training without image-caption corpora. In particular, we propose to conduct “mask-and-predict” pre-training on text-only and image-only corpora and introduce the object tags detected by an object recognition model as anchor points to bridge two modalities. We find that such a simple approach achieves performance close to a model pre-trained with aligned data, on four English V&L benchmarks. Our work challenges the widely held notion that aligned data is necessary for V&L pre-training, while significantly reducing the amount of supervision needed for V&L models.
|
https://aclanthology.org/2021.naacl-main.420
|
https://aclanthology.org/2021.naacl-main.420.pdf
|
NAACL 2021
|
|||
Multitasking Inhibits Semantic Drift
|
Athul Paul Jacob, Mike Lewis, Jacob Andreas
|
When intelligent agents communicate to accomplish shared goals, how do these goals shape the agents’ language? We study the dynamics of learning in latent language policies (LLPs), in which instructor agents generate natural-language subgoal descriptions and executor agents map these descriptions to low-level actions. LLPs can solve challenging long-horizon reinforcement learning problems and provide a rich model for studying task-oriented language use. But previous work has found that LLP training is prone to semantic drift (use of messages in ways inconsistent with their original natural language meanings). Here, we demonstrate theoretically and empirically that multitask training is an effective counter to this problem: we prove that multitask training eliminates semantic drift in a well-studied family of signaling games, and show that multitask training of neural LLPs in a complex strategy game reduces drift while improving sample efficiency.
|
https://aclanthology.org/2021.naacl-main.421
|
https://aclanthology.org/2021.naacl-main.421.pdf
|
NAACL 2021
|
|||
Probing Contextual Language Models for Common Ground with Visual Representations
|
Gabriel Ilharco, Rowan Zellers, Ali Farhadi, Hannaneh Hajishirzi
|
The success of large-scale contextual language models has attracted great interest in probing what is encoded in their representations. In this work, we consider a new question: to what extent are contextual representations of concrete nouns aligned with corresponding visual representations? We design a probing model that evaluates how effective text-only representations are at distinguishing between matching and non-matching visual representations. Our findings show that language representations alone provide a strong signal for retrieving image patches from the correct object categories. Moreover, they are effective in retrieving specific instances of image patches; textual context plays an important role in this process. Visually grounded language models slightly outperform text-only language models in instance retrieval, but greatly under-perform humans. We hope our analyses inspire future research in understanding and improving the visual capabilities of language models.
|
https://aclanthology.org/2021.naacl-main.422
|
https://aclanthology.org/2021.naacl-main.422.pdf
|
NAACL 2021
|
|||
BBAEG: Towards BERT-based Biomedical Adversarial Example Generation for Text Classification
|
Ishani Mondal
|
Healthcare predictive analytics aids medical decision-making, diagnosis prediction and drug review analysis. Therefore, prediction accuracy is an important criterion, which also necessitates robust predictive language models. However, deep learning models have been shown to be vulnerable to slightly perturbed inputs that humans are unlikely to misclassify. Recent efforts at generating adversaries using rule-based synonyms and BERT-MLMs have been witnessed in the general domain, but the ever-increasing biomedical literature poses unique challenges. We propose BBAEG (Biomedical BERT-based Adversarial Example Generation), a black-box attack algorithm for biomedical text classification, leveraging the strengths of both domain-specific synonym replacement for biomedical named entities and BERT-MLM predictions, spelling variation and number replacement. Through automatic and human evaluation on two datasets, we demonstrate that BBAEG performs stronger attacks with better language fluency and semantic coherence than prior work.
|
https://aclanthology.org/2021.naacl-main.423
|
https://aclanthology.org/2021.naacl-main.423.pdf
|
NAACL 2021
|
|||
Targeted Adversarial Training for Natural Language Understanding
|
Lis Pereira, Xiaodong Liu, Hao Cheng, Hoifung Poon, Jianfeng Gao, Ichiro Kobayashi
|
We present a simple yet effective Targeted Adversarial Training (TAT) algorithm to improve adversarial training for natural language understanding. The key idea is to introspect current mistakes and prioritize adversarial training steps to where the model errs the most. Experiments show that TAT can significantly improve accuracy over standard adversarial training on GLUE and attain new state-of-the-art zero-shot results on XNLI. Our code will be released upon acceptance of the paper.
|
https://aclanthology.org/2021.naacl-main.424
|
https://aclanthology.org/2021.naacl-main.424.pdf
|
NAACL 2021
|
|||
Latent-Optimized Adversarial Neural Transfer for Sarcasm Detection
|
Xu Guo, Boyang Li, Han Yu, Chunyan Miao
|
The existence of multiple datasets for sarcasm detection prompts us to apply transfer learning to exploit their commonality. The adversarial neural transfer (ANT) framework utilizes multiple loss terms that encourage the source-domain and the target-domain feature distributions to be similar while optimizing for domain-specific performance. However, these objectives may be in conflict, which can lead to optimization difficulties and sometimes diminished transfer. We propose a generalized latent optimization strategy that allows different losses to accommodate each other and improves training dynamics. The proposed method outperforms transfer learning and meta-learning baselines. In particular, we achieve 10.02% absolute performance gain over the previous state of the art on the iSarcasm dataset.
|
https://aclanthology.org/2021.naacl-main.425
|
https://aclanthology.org/2021.naacl-main.425.pdf
|
NAACL 2021
|
|||
Self-training Improves Pre-training for Natural Language Understanding
|
Jingfei Du, Edouard Grave, Beliz Gunel, Vishrav Chaudhary, Onur Celebi, Michael Auli, Veselin Stoyanov, Alexis Conneau
|
Unsupervised pre-training has led to much recent progress in natural language understanding. In this paper, we study self-training as another way to leverage unlabeled data through semi-supervised learning. To obtain additional data for a specific task, we introduce SentAugment, a data augmentation method which computes task-specific query embeddings from labeled data to retrieve sentences from a bank of billions of unlabeled sentences crawled from the web. Unlike previous semi-supervised methods, our approach does not require in-domain unlabeled data and is therefore more generally applicable. Experiments show that self-training is complementary to strong RoBERTa baselines on a variety of tasks. Our augmentation approach leads to scalable and effective self-training with improvements of up to 2.6% on standard text classification benchmarks. Finally, we also show strong gains on knowledge-distillation and few-shot learning.
|
https://aclanthology.org/2021.naacl-main.426
|
https://aclanthology.org/2021.naacl-main.426.pdf
|
NAACL 2021
|
|||
Supporting Clustering with Contrastive Learning
|
Dejiao Zhang, Feng Nan, Xiaokai Wei, Shang-Wen Li, Henghui Zhu, Kathleen McKeown, Ramesh Nallapati, Andrew O. Arnold, Bing Xiang
|
Unsupervised clustering aims at discovering the semantic categories of data according to some distance measured in the representation space. However, different categories often overlap with each other in the representation space at the beginning of the learning process, which poses a significant challenge for distance-based clustering in achieving good separation between different categories. To this end, we propose Supporting Clustering with Contrastive Learning (SCCL) – a novel framework to leverage contrastive learning to promote better separation. We assess the performance of SCCL on short text clustering and show that SCCL significantly advances the state-of-the-art results on most benchmark datasets with 3%-11% improvement on Accuracy and 4%-15% improvement on Normalized Mutual Information. Furthermore, our quantitative analysis demonstrates the effectiveness of SCCL in leveraging the strengths of both bottom-up instance discrimination and top-down clustering to achieve better intra-cluster and inter-cluster distances when evaluated with the ground truth cluster labels.
|
https://aclanthology.org/2021.naacl-main.427
|
https://aclanthology.org/2021.naacl-main.427.pdf
|
NAACL 2021
|
|||
TITA: A Two-stage Interaction and Topic-Aware Text Matching Model
|
Xingwu Sun, Yanling Cui, Hongyin Tang, Qiuyu Zhu, Fuzheng Zhang, Beihong Jin
|
In this paper, we focus on the problem of keyword and document matching by considering different relevance levels. In our recommendation system, different people follow different hot keywords with interest. We need to attach documents to each keyword and then distribute the documents to people who follow these keywords. The ideal documents should have the same topic as the keyword, which we call topic-aware relevance. In other words, topic-aware relevant documents are better than partially relevant ones in this application. However, previous tasks never define topic-aware relevance clearly. To tackle this problem, we define a three-level relevance in the keyword-document matching task: topic-aware relevance, partial relevance and irrelevance. To capture the relevance between the short keyword and the document at the above-mentioned three levels, we should not only combine the latent topic of the document with its deep neural representation, but also model complex interactions between the keyword and the document. To this end, we propose a Two-stage Interaction and Topic-Aware text matching model (TITA). In terms of “topic-aware”, we introduce a neural topic model to analyze the topic of the document and then use it to further encode the document. In terms of “two-stage interaction”, we propose two successive stages to model complex interactions between the keyword and the document. Extensive experiments reveal that TITA outperforms other well-designed baselines and shows excellent performance in our recommendation system.
|
https://aclanthology.org/2021.naacl-main.428
|
https://aclanthology.org/2021.naacl-main.428.pdf
|
NAACL 2021
|
|||
Neural Quality Estimation with Multiple Hypotheses for Grammatical Error Correction
|
Zhenghao Liu, Xiaoyuan Yi, Maosong Sun, Liner Yang, Tat-Seng Chua
|
Grammatical Error Correction (GEC) aims to correct writing errors and help language learners improve their writing skills. However, existing GEC models tend to produce spurious corrections or fail to detect many errors. A quality estimation model is necessary to ensure learners get accurate GEC results and avoid being misled by poorly corrected sentences. Well-trained GEC models can generate several high-quality hypotheses through decoding, such as beam search, which provide valuable GEC evidence and can be used to evaluate GEC quality. However, existing models neglect the possible GEC evidence from different hypotheses. This paper presents the Neural Verification Network (VERNet) for GEC quality estimation with multiple hypotheses. VERNet establishes interactions among hypotheses with a reasoning graph and conducts two kinds of attention mechanisms to propagate GEC evidence to verify the quality of generated hypotheses. Our experiments on four GEC datasets show that VERNet achieves state-of-the-art grammatical error detection performance, achieves the best quality estimation results, and significantly improves GEC performance by reranking hypotheses. All data and source codes are available at https://github.com/thunlp/VERNet.
|
https://aclanthology.org/2021.naacl-main.429
|
https://aclanthology.org/2021.naacl-main.429.pdf
|
NAACL 2021
|
|||
Neural Network Surgery: Injecting Data Patterns into Pre-trained Models with Minimal Instance-wise Side Effects
|
Zhiyuan Zhang, Xuancheng Ren, Qi Su, Xu Sun, Bin He
|
Side effects during neural network tuning are typically measured by overall accuracy changes. However, we find that even with similar overall accuracy, existing tuning methods result in non-negligible instance-wise side effects. Motivated by neuroscientific evidence and theoretical results, we demonstrate that side effects can be controlled by the number of changed parameters and thus, we propose to conduct neural network surgery by only modifying a limited number of parameters. Neural network surgery can be realized using diverse techniques and we investigate three lines of methods. Experimental results on representative tuning problems validate the effectiveness of the surgery approach. The dynamic selecting method achieves the best overall performance that not only satisfies the tuning goal but also induces fewer instance-wise side effects by changing only 10^{-5} of the parameters.
|
https://aclanthology.org/2021.naacl-main.430
|
https://aclanthology.org/2021.naacl-main.430.pdf
|
NAACL 2021
|
|||
Discrete Argument Representation Learning for Interactive Argument Pair Identification
|
Lu Ji, Zhongyu Wei, Jing Li, Qi Zhang, Xuanjing Huang
|
In this paper, we focus on identifying interactive argument pairs from two posts with opposite stances to a certain topic. Considering opinions are exchanged from different perspectives of the discussing topic, we study the discrete representations for arguments to capture varying aspects in argumentation languages (e.g., the debate focus and the participant behavior). Moreover, we utilize hierarchical structure to model post-wise information incorporating contextual knowledge. Experimental results on the large-scale dataset collected from CMV show that our proposed framework can significantly outperform the competitive baselines. Further analyses reveal why our model yields superior performance and prove the usefulness of our learned representations.
|
https://aclanthology.org/2021.naacl-main.431
|
https://aclanthology.org/2021.naacl-main.431.pdf
|
NAACL 2021
|
|||
On Unifying Misinformation Detection
|
Nayeon Lee, Belinda Z. Li, Sinong Wang, Pascale Fung, Hao Ma, Wen-tau Yih, Madian Khabsa
|
In this paper, we introduce UnifiedM2, a general-purpose misinformation model that jointly models multiple domains of misinformation with a single, unified setup. The model is trained to handle four tasks: detecting news bias, clickbait, fake news, and verifying rumors. By grouping these tasks together, UnifiedM2 learns a richer representation of misinformation, which leads to state-of-the-art or comparable performance across all tasks. Furthermore, we demonstrate that UnifiedM2’s learned representation is helpful for few-shot learning of unseen misinformation tasks/datasets and the model’s generalizability to unseen events.
|
https://aclanthology.org/2021.naacl-main.432
|
https://aclanthology.org/2021.naacl-main.432.pdf
|
NAACL 2021
|
|||
Frustratingly Easy Edit-based Linguistic Steganography with a Masked Language Model
|
Honai Ueoka, Yugo Murawaki, Sadao Kurohashi
|
With advances in neural language models, the focus of linguistic steganography has shifted from edit-based approaches to generation-based ones. While the latter’s payload capacity is impressive, generating genuine-looking texts remains challenging. In this paper, we revisit edit-based linguistic steganography, with the idea that a masked language model offers an off-the-shelf solution. The proposed method eliminates painstaking rule construction and has a high payload capacity for an edit-based model. It is also shown to be more secure against automatic detection than a generation-based method while offering better control of the security/payload capacity trade-off.
|
https://aclanthology.org/2021.naacl-main.433
|
https://aclanthology.org/2021.naacl-main.433.pdf
|
NAACL 2021
|
|||
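To illustrate the edit-based scheme in “Frustratingly Easy Edit-based Linguistic Steganography with a Masked Language Model” above, the sketch below hides bits by choosing among substitution candidates at selected positions. In the paper the candidates come from a masked language model's top predictions; here they are a hypothetical hand-written list, and each position carries exactly one bit.

```python
def encode_bits(tokens, candidates, bits):
    """Hide `bits` by picking, at each editable position, the candidate whose
    index equals the next bit (two interchangeable words = one bit per slot)."""
    out = list(tokens)
    bit_iter = iter(bits)
    for pos, options in candidates.items():
        bit = next(bit_iter, None)
        if bit is None:
            break
        out[pos] = options[bit]
    return out

def decode_bits(tokens, candidates):
    """Recover the bitstring from the stego text given the same candidate lists."""
    return [candidates[pos].index(tokens[pos]) for pos in candidates]

cover = "the movie was quite good and very long".split()
# Hypothetical MLM-derived substitution candidates per position.
cands = {3: ["quite", "rather"], 4: ["good", "nice"], 6: ["very", "really"]}
stego = encode_bits(cover, cands, [1, 0, 1])
print(" ".join(stego))            # the movie was rather good and really long
print(decode_bits(stego, cands))  # [1, 0, 1]
```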
Few-Shot Text Classification with Triplet Networks, Data Augmentation, and Curriculum Learning
|
Jason Wei, Chengyu Huang, Soroush Vosoughi, Yu Cheng, Shiqi Xu
|
Few-shot text classification is a fundamental NLP task in which a model aims to classify text into a large number of categories, given only a few training examples per category. This paper explores data augmentation—a technique particularly suitable for training with limited data—for this few-shot, highly-multiclass text classification setting. On four diverse text classification tasks, we find that common data augmentation techniques can improve the performance of triplet networks by up to 3.0% on average. To further boost performance, we present a simple training strategy called curriculum data augmentation, which leverages curriculum learning by first training on only original examples and then introducing augmented data as training progresses. We explore a two-stage and a gradual schedule, and find that, compared with standard single-stage training, curriculum data augmentation trains faster, improves performance, and remains robust to high amounts of noising from augmentation.
|
https://aclanthology.org/2021.naacl-main.434
|
https://aclanthology.org/2021.naacl-main.434.pdf
|
NAACL 2021
|
|||
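The two-stage curriculum described in “Few-Shot Text Classification with Triplet Networks, Data Augmentation, and Curriculum Learning” above is easy to express as a training schedule. The sketch below shows only the schedule, with a placeholder train_step and a hypothetical augment function; the triplet network itself is not reproduced.

```python
import random

def augment(example):
    """Hypothetical augmentation (e.g., random word shuffling as a stand-in)."""
    words = example["text"].split()
    random.shuffle(words)
    return {"text": " ".join(words), "label": example["label"]}

def train_step(batch):
    """Placeholder for one optimization step of the triplet network."""
    pass

def curriculum_training(originals, epochs=10, warmup_epochs=3, batch_size=16):
    for epoch in range(epochs):
        if epoch < warmup_epochs:
            data = list(originals)                                        # stage 1: originals only
        else:
            data = list(originals) + [augment(x) for x in originals]      # stage 2: add augmented data
        random.shuffle(data)
        for i in range(0, len(data), batch_size):
            train_step(data[i:i + batch_size])

curriculum_training([{"text": "great movie", "label": 1},
                     {"text": "terrible plot", "label": 0}])
```

The paper also explores a gradual schedule in which the amount of augmentation noising increases over epochs; that variant only changes how `data` is built per epoch.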
Do RNN States Encode Abstract Phonological Alternations?
|
Miikka Silfverberg, Francis Tyers, Garrett Nicolai, Mans Hulden
|
Sequence-to-sequence models have delivered impressive results in word formation tasks such as morphological inflection, often learning to model subtle morphophonological details with limited training data. Despite the performance, the opacity of neural models makes it difficult to determine whether complex generalizations are learned, or whether a kind of separate rote memorization of each morphophonological process takes place. To investigate whether complex alternations are simply memorized or whether there is some level of generalization across related sound changes in a sequence-to-sequence model, we perform several experiments on Finnish consonant gradation—a complex set of sound changes triggered in some words by certain suffixes. We find that our models often—though not always—encode 17 different consonant gradation processes in a handful of dimensions in the RNN. We also show that by scaling the activations in these dimensions we can control whether consonant gradation occurs and the direction of the gradation.
|
https://aclanthology.org/2021.naacl-main.435
|
https://aclanthology.org/2021.naacl-main.435.pdf
|
NAACL 2021
|
|||
Pre-training with Meta Learning for Chinese Word Segmentation
|
Zhen Ke, Liang Shi, Songtao Sun, Erli Meng, Bin Wang, Xipeng Qiu
|
Recent research shows that pre-trained models (PTMs) are beneficial to Chinese Word Segmentation (CWS). However, PTMs used in previous works usually adopt language modeling as pre-training tasks, lacking task-specific prior segmentation knowledge and ignoring the discrepancy between pre-training tasks and downstream CWS tasks. In this paper, we propose a CWS-specific pre-trained model MetaSeg, which employs a unified architecture and incorporates meta learning algorithm into a multi-criteria pre-training task. Empirical results show that MetaSeg could utilize common prior segmentation knowledge from different existing criteria and alleviate the discrepancy between pre-trained models and downstream CWS tasks. Besides, MetaSeg can achieve new state-of-the-art performance on twelve widely-used CWS datasets and significantly improve model performance in low-resource settings.
|
https://aclanthology.org/2021.naacl-main.436
|
https://aclanthology.org/2021.naacl-main.436.pdf
|
NAACL 2021
|
|||
Decompose, Fuse and Generate: A Formation-Informed Method for Chinese Definition Generation
|
Hua Zheng, Damai Dai, Lei Li, Tianyu Liu, Zhifang Sui, Baobao Chang, Yang Liu
|
In this paper, we tackle the task of Definition Generation (DG) in Chinese, which aims at automatically generating a definition for a word. Most existing methods take the source word as an indecomposable semantic unit. However, in parataxis languages like Chinese, word meanings can be composed using the word formation process, where a word (“桃花”, peach-blossom) is formed by formation components (“桃”, peach; “花”, flower) using a formation rule (Modifier-Head). Inspired by this process, we propose to enhance DG with word formation features. We build a formation-informed dataset, and propose a model DeFT, which Decomposes words into formation features, dynamically Fuses different features through a gating mechanism, and generaTes word definitions. Experimental results show that our method is both effective and robust.
|
https://aclanthology.org/2021.naacl-main.437
|
https://aclanthology.org/2021.naacl-main.437.pdf
|
NAACL 2021
|
|||
User-Generated Text Corpus for Evaluating Japanese Morphological Analysis and Lexical Normalization
|
Shohei Higashiyama, Masao Utiyama, Taro Watanabe, Eiichiro Sumita
|
Morphological analysis (MA) and lexical normalization (LN) are both important tasks for Japanese user-generated text (UGT). To evaluate and compare different MA/LN systems, we have constructed a publicly available Japanese UGT corpus. Our corpus comprises 929 sentences annotated with morphological and normalization information, along with category information we classified for frequent UGT-specific phenomena. Experiments on the corpus demonstrated the low performance of existing MA/LN methods for non-general words and non-standard forms, indicating that the corpus would be a challenging benchmark for further research on UGT.
|
https://aclanthology.org/2021.naacl-main.438
|
https://aclanthology.org/2021.naacl-main.438.pdf
|
NAACL 2021
|
|||
GPT Perdetry Test: Generating new meanings for new words
|
Nikolay Malkin, Sameera Lanka, Pranav Goel, Sudha Rao, Nebojsa Jojic
|
Human innovation in language, such as inventing new words, is a challenge for pretrained language models. We assess the ability of one large model, GPT-3, to process new words and decide on their meaning. We create a set of nonce words and prompt GPT-3 to generate their dictionary definitions. We find GPT-3 produces plausible definitions that align with human judgments. Moreover, GPT-3’s definitions are sometimes preferred to those invented by humans, signaling its intriguing ability not just to adapt, but to add to the evolving vocabulary of the English language.
|
https://aclanthology.org/2021.naacl-main.439
|
https://aclanthology.org/2021.naacl-main.439.pdf
|
NAACL 2021
|
|||
Universal Semantic Tagging for English and Mandarin Chinese
|
Wenxi Li, Yiyang Hou, Yajie Ye, Li Liang, Weiwei Sun
|
Universal Semantic Tagging aims to provide lightweight unified analysis for all languages at the word level. Though the proposed annotation scheme is conceptually promising, the feasibility is only examined in four Indo–European languages. This paper is concerned with extending the annotation scheme to handle Mandarin Chinese and empirically studying the plausibility of unifying meaning representations for multiple languages. We discuss a set of language-specific semantic phenomena, propose new annotation specifications and build a richly annotated corpus. The corpus consists of 1100 English–Chinese parallel sentences, where compositional semantic analysis is available for English, and another 1000 Chinese sentences with enriched syntactic analysis. By means of the new annotations, we also evaluate a series of neural tagging models to gauge how successful semantic tagging can be: accuracies of 92.7% and 94.6% are obtained for Chinese and English respectively. The English tagging performance is remarkably better than the state-of-the-art by 7.7%.
|
https://aclanthology.org/2021.naacl-main.440
|
https://aclanthology.org/2021.naacl-main.440.pdf
|
NAACL 2021
|
|||
ShadowGNN: Graph Projection Neural Network for Text-to-SQL Parser
|
Zhi Chen, Lu Chen, Yanbin Zhao, Ruisheng Cao, Zihan Xu, Su Zhu, Kai Yu
|
Given a database schema, Text-to-SQL aims to translate a natural language question into the corresponding SQL query. Under the cross-domain setup, traditional semantic parsing models struggle to adapt to unseen database schemas. To improve the model generalization capability for rare and unseen schemas, we propose a new architecture, ShadowGNN, which processes schemas at abstract and semantic levels. By ignoring names of semantic items in databases, abstract schemas are exploited in a well-designed graph projection neural network to obtain delexicalized representations of question and schema. Based on the domain-independent representations, a relation-aware transformer is utilized to further extract logical linking between question and schema. Finally, a SQL decoder with context-free grammar is applied. On the challenging Text-to-SQL benchmark Spider, empirical results show that ShadowGNN outperforms state-of-the-art models. When the annotated data is extremely limited (only 10% of the training set), ShadowGNN obtains an absolute performance gain of over 5%, which shows its powerful generalization ability. Our implementation will be open-sourced at https://github.com/WowCZ/shadowgnn.
|
https://aclanthology.org/2021.naacl-main.441
|
https://aclanthology.org/2021.naacl-main.441.pdf
|
NAACL 2021
|
|||
Contextualized and Generalized Sentence Representations by Contrastive Self-Supervised Learning: A Case Study on Discourse Relation Analysis
|
Hirokazu Kiyomaru, Sadao Kurohashi
|
We propose a method to learn contextualized and generalized sentence representations using contrastive self-supervised learning. In the proposed method, a model is given a text consisting of multiple sentences. One sentence is randomly selected as a target sentence. The model is trained to maximize the similarity between the representation of the target sentence with its context and that of the masked target sentence with the same context. Simultaneously, the model minimizes the similarity between the latter representation and the representation of a random sentence with the same context. We apply our method to discourse relation analysis in English and Japanese and show that it outperforms strong baseline methods based on BERT, XLNet, and RoBERTa.
|
https://aclanthology.org/2021.naacl-main.442
|
https://aclanthology.org/2021.naacl-main.442.pdf
|
NAACL 2021
|
|||
AMR Parsing with Action-Pointer Transformer
|
Jiawei Zhou, Tahira Naseem, Ramón Fernandez Astudillo, Radu Florian
|
Abstract Meaning Representation parsing is a sentence-to-graph prediction task where target nodes are not explicitly aligned to sentence tokens. However, since graph nodes are semantically based on one or more sentence tokens, implicit alignments can be derived. Transition-based parsers operate over the sentence from left to right, capturing this inductive bias via alignments at the cost of limited expressiveness. In this work, we propose a transition-based system that combines hard-attention over sentences with a target-side action pointer mechanism to decouple source tokens from node representations and address alignments. We model the transitions as well as the pointer mechanism through straightforward modifications within a single Transformer architecture. Parser state and graph structure information are efficiently encoded using attention heads. We show that our action-pointer approach leads to increased expressiveness and attains large gains (+1.6 points) against the best transition-based AMR parser in very similar conditions. While using no graph re-categorization, our single model yields the second best Smatch score on AMR 2.0 (81.8), which is further improved to 83.4 with silver data and ensemble decoding.
|
https://aclanthology.org/2021.naacl-main.443
|
https://aclanthology.org/2021.naacl-main.443.pdf
|
NAACL 2021
|
|||
NL-EDIT: Correcting Semantic Parse Errors through Natural Language Interaction
|
Ahmed Elgohary, Christopher Meek, Matthew Richardson, Adam Fourney, Gonzalo Ramos, Ahmed Hassan Awadallah
|
We study semantic parsing in an interactive setting in which users correct errors with natural language feedback. We present NL-EDIT, a model for interpreting natural language feedback in the interaction context to generate a sequence of edits that can be applied to the initial parse to correct its errors. We show that NL-EDIT can boost the accuracy of existing text-to-SQL parsers by up to 20% with only one turn of correction. We analyze the limitations of the model and discuss directions for improvement and evaluation. The code and datasets used in this paper are publicly available at http://aka.ms/NLEdit.
|
https://aclanthology.org/2021.naacl-main.444
|
https://aclanthology.org/2021.naacl-main.444.pdf
|
NAACL 2021
|
|||
Unsupervised Concept Representation Learning for Length-Varying Text Similarity
|
Xuchao Zhang, Bo Zong, Wei Cheng, Jingchao Ni, Yanchi Liu, Haifeng Chen
|
Measuring document similarity plays an important role in natural language processing tasks. Most existing document similarity approaches suffer from the information gap caused by context and vocabulary mismatches when comparing varying-length texts. In this paper, we propose an unsupervised concept representation learning approach to address the above issues. Specifically, we propose a novel Concept Generation Network (CGNet) to learn concept representations from the perspective of the entire text corpus. Moreover, a concept-based document matching method is proposed to leverage advances in the recognition of local phrase features and corpus-level concept features. Extensive experiments on real-world data sets demonstrate that the new method can achieve a considerable improvement in comparing length-varying texts. In particular, our model achieved a 6.5% better F1 score compared to the best of the baseline models on a concept-project benchmark dataset.
|
https://aclanthology.org/2021.naacl-main.445
|
https://aclanthology.org/2021.naacl-main.445.pdf
|
NAACL 2021
|
|||
Augmenting Knowledge-grounded Conversations with Sequential Knowledge Transition
|
Haolan Zhan, Hainan Zhang, Hongshen Chen, Zhuoye Ding, Yongjun Bao, Yanyan Lan
|
Knowledge data are massive and widespread in the real world, and can serve as good external sources to enrich conversations. However, in knowledge-grounded conversations, current models still lack fine-grained control over knowledge selection and integration with dialogues, which leads to knowledge-irrelevant response generation problems: 1) knowledge selection merely relies on the dialogue context, ignoring the inherent knowledge transitions along with conversation flows; 2) the models often over-fit during training, resulting in incoherent responses that refer to unrelated tokens from specific knowledge content in the testing phase; 3) although the response is generated upon the dialogue history and knowledge, the models often tend to overlook the selected knowledge and hence generate knowledge-irrelevant responses. To address these problems, we propose to explicitly model the knowledge transition in sequential multi-turn conversations by abstracting knowledge into topic tags. Besides, to fully utilize the selected knowledge in the generation process, we propose pre-training a knowledge-aware response generator that pays more attention to the selected knowledge. In particular, a sequential knowledge transition model equipped with a pre-trained knowledge-aware response generator (SKT-KG) formulates the high-level knowledge transition and fully utilizes the limited knowledge data. Experimental results on both structured and unstructured knowledge-grounded dialogue benchmarks indicate that our model achieves better performance than baseline models.
|
https://aclanthology.org/2021.naacl-main.446
|
https://aclanthology.org/2021.naacl-main.446.pdf
|
NAACL 2021
|
|||
Adversarial Self-Supervised Learning for Out-of-Domain Detection
|
Zhiyuan Zeng, Keqing He, Yuanmeng Yan, Hong Xu, Weiran Xu
|
Detecting out-of-domain (OOD) intents is crucial for the deployed task-oriented dialogue system. Previous unsupervised OOD detection methods only extract discriminative features of different in-domain intents while supervised counterparts can directly distinguish OOD and in-domain intents but require extensive labeled OOD data. To combine the benefits of both types, we propose a self-supervised contrastive learning framework to model discriminative semantic features of both in-domain intents and OOD intents from unlabeled data. Besides, we introduce an adversarial augmentation neural module to improve the efficiency and robustness of contrastive learning. Experiments on two public benchmark datasets show that our method can consistently outperform the baselines with a statistically significant margin.
|
https://aclanthology.org/2021.naacl-main.447
|
https://aclanthology.org/2021.naacl-main.447.pdf
|
NAACL 2021
|
|||
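As a minimal, hedged illustration of the self-supervised contrastive objective described in the abstract above: the sketch below is a generic SimCLR-style loss over two "views" of the same batch of utterances, not the authors' implementation. The encoder outputs, the temperature, and the way augmented views are produced are illustrative assumptions; in particular, the paper's adversarial augmentation module is only stood in for by a comment.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """SimCLR-style contrastive loss between two augmented views of the same batch.

    z1, z2: [N, dim] utterance representations. Example i's positive is its
    counterpart in the other view; the remaining 2N - 2 rows act as negatives.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)          # [2N, dim]
    sim = z @ z.t() / temperature                                # cosine similarities
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # no self-pairs
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])  # positive of i is i + N (and vice versa)
    return F.cross_entropy(sim, targets)

# Toy usage: random "encoder outputs" for two noisy views of 8 utterances.
# A real system would encode adversarially augmented copies of each utterance instead.
view1 = torch.randn(8, 128, requires_grad=True)
view2 = torch.randn(8, 128, requires_grad=True)
loss = nt_xent_loss(view1, view2)
loss.backward()  # would update the shared encoder in an actual training loop
```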
Leveraging Slot Descriptions for Zero-Shot Cross-Domain Dialogue State Tracking
|
Zhaojiang Lin, Bing Liu, Seungwhan Moon, Paul Crook, Zhenpeng Zhou, Zhiguang Wang, Zhou Yu, Andrea Madotto, Eunjoon Cho, Rajen Subba
|
Zero-shot cross-domain dialogue state tracking (DST) enables us to handle unseen domains without the expense of collecting in-domain data. In this paper, we propose a slot-description-enhanced generative approach for zero-shot cross-domain DST. Specifically, our model first encodes a dialogue context and a slot with a pre-trained self-attentive encoder, and then generates the slot value in an auto-regressive manner. In addition, we incorporate Slot Type Informed Descriptions that capture the shared information of different slots to facilitate cross-domain knowledge transfer. Experimental results on MultiWOZ show that our model significantly improves existing state-of-the-art results in the zero-shot cross-domain setting.
|
https://aclanthology.org/2021.naacl-main.448
|
https://aclanthology.org/2021.naacl-main.448.pdf
|
NAACL 2021
|
|||
Hierarchical Transformer for Task Oriented Dialog Systems
|
Bishal Santra, Potnuru Anusha, Pawan Goyal
|
Generative models for dialog systems have gained much interest because of the recent success of RNN and Transformer based models in tasks like question answering and summarization. Although the task of dialog response generation is generally seen as a sequence to sequence (Seq2Seq) problem, researchers in the past have found it challenging to train dialog systems using the standard Seq2Seq models. Therefore, to help the model learn meaningful utterance and conversation level features, Sordoni et al. (2015b) and Serban et al. (2016) proposed the Hierarchical RNN architecture, which was later adopted by several other RNN based dialog systems. With transformer-based models dominating seq2seq problems lately, the natural question to ask is whether the notion of hierarchy also applies to transformer-based dialog systems. In this paper, we propose a generalized framework for Hierarchical Transformer Encoders and show how a standard transformer can be morphed into any hierarchical encoder, including HRED and HIBERT like models, by using specially designed attention masks and positional encodings. We demonstrate that Hierarchical Encoding helps achieve better natural language understanding of the contexts in transformer-based models for task-oriented dialog systems through a wide range of experiments.
|
https://aclanthology.org/2021.naacl-main.449
|
https://aclanthology.org/2021.naacl-main.449.pdf
|
NAACL 2021
|
|||
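A small, hedged illustration of how the attention-mask idea in the abstract above can be realized: the block-diagonal mask below restricts token-level attention to each utterance, turning a flat transformer into an utterance-local encoder. How the utterance-level attention and positional encodings are then wired is not shown and is left as an assumption rather than the paper's exact recipe.

```python
import torch

def utterance_block_mask(utterance_ids: torch.Tensor) -> torch.Tensor:
    """Boolean [seq, seq] matrix: True where attention is allowed.

    Tokens may only attend to tokens from the same utterance, so a flat
    transformer behaves like a token-level (utterance-local) encoder.
    """
    return utterance_ids.unsqueeze(0) == utterance_ids.unsqueeze(1)

# toy dialog: 3 utterances flattened into one 8-token sequence
utt_ids = torch.tensor([0, 0, 0, 1, 1, 2, 2, 2])
allowed = utterance_block_mask(utt_ids)
# torch.nn.MultiheadAttention expects True where attention is *blocked*
attn_mask = ~allowed
print(attn_mask.int())
```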
Measuring the ‘I don’t know’ Problem through the Lens of Gricean Quantity
|
Huda Khayrallah, João Sedoc
|
We consider the intrinsic evaluation of neural generative dialog models through the lens of Grice’s Maxims of Conversation (1975). Based on the maxim of Quantity (be informative), we propose Relative Utterance Quantity (RUQ) to diagnose the ‘I don’t know’ problem, in which a dialog system produces generic responses. The linguistically motivated RUQ diagnostic compares the model score of a generic response to that of the reference response. We find that for reasonable baseline models, ‘I don’t know’ is preferred over the reference the majority of the time, but this can be reduced to less than 5% with hyperparameter tuning. RUQ allows for the direct analysis of the ‘I don’t know’ problem, which has been addressed but not analyzed by prior work.
|
https://aclanthology.org/2021.naacl-main.450
|
https://aclanthology.org/2021.naacl-main.450.pdf
|
NAACL 2021
|
|||
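The RUQ diagnostic described in the abstract above reduces to comparing the model's score for a generic response against its score for the reference. The sketch below shows that comparison with a stand-in scoring function; the length normalisation and the toy scorer are assumptions for illustration, not the paper's exact formulation.

```python
from typing import Callable, List

# Hypothetical scorer: log p(response | context) under the dialog model,
# length-normalised so short generic replies are not trivially favoured.
ScoreFn = Callable[[str, str], float]

def ruq_generic_preference(
    dialogs: List[dict], score: ScoreFn, generic: str = "I don't know."
) -> float:
    """Fraction of contexts where the generic response outscores the reference."""
    preferred = sum(
        score(d["context"], generic) > score(d["context"], d["reference"])
        for d in dialogs
    )
    return preferred / len(dialogs)

# toy stand-in scorer: favours responses sharing words with the context
def toy_score(context: str, response: str) -> float:
    ctx, resp = set(context.lower().split()), response.lower().split()
    return sum(w in ctx for w in resp) / max(len(resp), 1)

data = [{"context": "where do you live", "reference": "I live in Paris."}]
print(ruq_generic_preference(data, toy_score))
```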
RTFE: A Recursive Temporal Fact Embedding Framework for Temporal Knowledge Graph Completion
|
Youri Xu, Haihong E, Meina Song, Wenyu Song, Xiaodong Lv, Wang Haotian, Yang Jinrui
|
Static knowledge graph (SKG) embedding (SKGE) has been studied intensively in the past years. Recently, temporal knowledge graph (TKG) embedding (TKGE) has emerged. In this paper, we propose a Recursive Temporal Fact Embedding (RTFE) framework to transplant SKGE models to TKGs and to enhance the performance of existing TKGE models for TKG completion. Different from previous work, which ignores the continuity of the TKG's states as it evolves over time, we treat the sequence of graphs as a Markov chain that transitions from the previous state to the next state. RTFE uses the SKGE to initialize the embeddings of the TKG. Then it recursively tracks the state transition of the TKG by passing updated parameters/features between timestamps. Specifically, at each timestamp, we approximate the state transition as a gradient update process. Since RTFE learns each timestamp recursively, it can naturally transition to future timestamps. Experiments on five TKG datasets show the effectiveness of RTFE.
|
https://aclanthology.org/2021.naacl-main.451
|
https://aclanthology.org/2021.naacl-main.451.pdf
|
NAACL 2021
|
|||
Open Hierarchical Relation Extraction
|
Kai Zhang, Yuan Yao, Ruobing Xie, Xu Han, Zhiyuan Liu, Fen Lin, Leyu Lin, Maosong Sun
|
Open relation extraction (OpenRE) aims to extract novel relation types from open-domain corpora, which plays an important role in completing the relation schemes of knowledge bases (KBs). Most OpenRE methods cast different relation types in isolation without considering their hierarchical dependency. We argue that OpenRE is inherently in close connection with relation hierarchies. To establish the bidirectional connections between OpenRE and relation hierarchy, we propose the task of open hierarchical relation extraction and present a novel OHRE framework for the task. We propose a dynamic hierarchical triplet objective and hierarchical curriculum training paradigm, to effectively integrate hierarchy information into relation representations for better novel relation extraction. We also present a top-down hierarchy expansion algorithm to add the extracted relations into existing hierarchies with reasonable interpretability. Comprehensive experiments show that OHRE outperforms state-of-the-art models by a large margin on both relation clustering and hierarchy expansion.
|
https://aclanthology.org/2021.naacl-main.452
|
https://aclanthology.org/2021.naacl-main.452.pdf
|
NAACL 2021
|
|||
Jointly Extracting Explicit and Implicit Relational Triples with Reasoning Pattern Enhanced Binary Pointer Network
|
Yubo Chen, Yunqi Zhang, Changran Hu, Yongfeng Huang
|
Relational triple extraction is a crucial task for knowledge graph construction. Existing methods mainly focused on explicit relational triples that are directly expressed, but usually suffer from ignoring implicit triples that lack explicit expressions. This will lead to serious incompleteness of the constructed knowledge graphs. Fortunately, other triples in the sentence provide supplementary information for discovering entity pairs that may have implicit relations. Also, the relation types between the implicitly connected entity pairs can be identified with relational reasoning patterns in the real world. In this paper, we propose a unified framework to jointly extract explicit and implicit relational triples. To explore entity pairs that may be implicitly connected by relations, we propose a binary pointer network to extract overlapping relational triples relevant to each word sequentially and retain the information of previously extracted triples in an external memory. To infer the relation types of implicit relational triples, we propose to introduce real-world relational reasoning patterns in our model and capture these patterns with a relation network. We conduct experiments on several benchmark datasets, and the results prove the validity of our method.
|
https://aclanthology.org/2021.naacl-main.453
|
https://aclanthology.org/2021.naacl-main.453.pdf
|
NAACL 2021
|
|||
Multi-Grained Knowledge Distillation for Named Entity Recognition
|
Xuan Zhou, Xiao Zhang, Chenyang Tao, Junya Chen, Bing Xu, Wei Wang, Jing Xiao
|
Although pre-trained big models (e.g., BERT, ERNIE, XLNet, GPT3, etc.) have delivered top performance in Seq2seq modeling, their deployment in real-world applications is often hindered by the excessive computation and memory demands involved. For many applications, including named entity recognition (NER), matching the state-of-the-art result under budget has attracted considerable attention. Drawing power from the recent advance in knowledge distillation (KD), this work presents a novel distillation scheme to efficiently transfer the knowledge learned from big models to their more affordable counterparts. Our solution highlights the construction of surrogate labels through the k-best Viterbi algorithm to distill knowledge from the teacher model. To maximally assimilate knowledge into the student model, we propose a multi-grained distillation scheme, which integrates the cross entropy involved in the conditional random field (CRF) and fuzzy learning. To validate the effectiveness of our proposal, we conducted a comprehensive evaluation on five NER benchmarks, reporting across-the-board performance gains relative to competing prior arts. We further discuss ablation results to dissect our gains.
|
https://aclanthology.org/2021.naacl-main.454
|
https://aclanthology.org/2021.naacl-main.454.pdf
|
NAACL 2021
|
|||
SGG: Learning to Select, Guide, and Generate for Keyphrase Generation
|
Jing Zhao, Junwei Bao, Yifan Wang, Youzheng Wu, Xiaodong He, Bowen Zhou
|
Keyphrases, which concisely summarize the high-level topics discussed in a document, can be categorized into present keyphrases, which explicitly appear in the source text, and absent keyphrases, which do not match any contiguous subsequence but are highly semantically related to the source. Most existing keyphrase generation approaches synchronously generate present and absent keyphrases without explicitly distinguishing these two categories. In this paper, a Select-Guide-Generate (SGG) approach is proposed to handle present and absent keyphrase generation separately with different mechanisms. Specifically, SGG is a hierarchical neural network which consists of a pointing-based selector at the low layer concentrated on present keyphrase generation, a selection-guided generator at the high layer dedicated to absent keyphrase generation, and a guider in the middle to transfer information from the selector to the generator. Experimental results on four keyphrase generation benchmarks demonstrate the effectiveness of our model, which significantly outperforms strong baselines for both present and absent keyphrase generation. Furthermore, we extend SGG to a title generation task, which indicates its extensibility to natural language generation tasks.
|
https://aclanthology.org/2021.naacl-main.455
|
https://aclanthology.org/2021.naacl-main.455.pdf
|
NAACL 2021
|
|||
Towards Sentiment and Emotion aided Multi-modal Speech Act Classification in Twitter
|
Tulika Saha, Apoorva Upadhyaya, Sriparna Saha, Pushpak Bhattacharyya
|
Speech Act Classification, which determines the communicative intent of an utterance, has been investigated widely over the years as a standalone task. This holds true for discussion in any forum, including social media platforms such as Twitter. But the emotional state of the tweeter, which has a considerable effect on the communication, has not received the attention it deserves. Closely related to emotion is sentiment, and understanding of one helps understand the other. In this work, we first create a new multi-modal, emotion-TA (‘TA’ means tweet act, i.e., speech act in Twitter) dataset called EmoTA, collected from an open-source Twitter dataset. We propose a Dyadic Attention Mechanism (DAM) based multi-modal, adversarial multi-tasking framework. DAM incorporates intra-modal and inter-modal attention to fuse multiple modalities and learns generalized features across all the tasks. Experimental results indicate that the proposed framework boosts the performance of the primary task, i.e., TA classification (TAC), by benefitting from the two secondary tasks, i.e., Sentiment and Emotion Analysis, compared to its uni-modal and single-task TAC variants.
|
https://aclanthology.org/2021.naacl-main.456
|
https://aclanthology.org/2021.naacl-main.456.pdf
|
NAACL 2021
|
|||
Generative Imagination Elevates Machine Translation
|
Quanyu Long, Mingxuan Wang, Lei Li
|
There are common semantics shared across text and images. Given a sentence in a source language, does depicting the visual scene help translation into a target language? Existing multimodal neural machine translation (MNMT) methods require triplets of bilingual sentence - image for training and tuples of source sentence - image for inference. In this paper, we propose ImagiT, a novel machine translation method via visual imagination. ImagiT first learns to generate a visual representation from the source sentence, and then utilizes both the source sentence and the “imagined representation” to produce a target translation. Unlike previous methods, it only needs the source sentence at inference time. Experiments demonstrate that ImagiT benefits from visual imagination and significantly outperforms the text-only neural machine translation baselines. Further analysis reveals that the imagination process in ImagiT helps fill in missing information when performing the degradation strategy.
|
https://aclanthology.org/2021.naacl-main.457
|
https://aclanthology.org/2021.naacl-main.457.pdf
|
NAACL 2021
|
|||
Non-Autoregressive Translation by Learning Target Categorical Codes
|
Yu Bao, Shujian Huang, Tong Xiao, Dongqi Wang, Xinyu Dai, Jiajun Chen
|
The non-autoregressive Transformer is a promising text generation model. However, current non-autoregressive models still fall behind their autoregressive counterparts in translation quality. We attribute this accuracy gap to the lack of dependency modeling among decoder inputs. In this paper, we propose CNAT, which implicitly learns categorical codes as latent variables for non-autoregressive decoding. The interaction among these categorical codes remedies the missing dependencies and improves the model capacity. Experimental results show that our model achieves comparable or better performance in machine translation tasks than several strong baselines.
|
https://aclanthology.org/2021.naacl-main.458
|
https://aclanthology.org/2021.naacl-main.458.pdf
|
NAACL 2021
|
|||
Training Data Augmentation for Code-Mixed Translation
|
Abhirut Gupta, Aditya Vavre, Sunita Sarawagi
|
Machine translation of user-generated code-mixed inputs to English is of crucial importance in applications like web search and targeted advertising. We address the scarcity of parallel training data for training such models by designing a strategy that converts existing non-code-mixed parallel data sources into code-mixed parallel data. We present an m-BERT based procedure whose core learnable component is a ternary sequence labeling model that can be trained with a limited code-mixed corpus alone. We show a 5.8 point increase in BLEU on heavily code-mixed sentences by training a translation model using our data augmentation strategy on a Hindi-English code-mixed translation task.
|
https://aclanthology.org/2021.naacl-main.459
|
https://aclanthology.org/2021.naacl-main.459.pdf
|
NAACL 2021
|
|||
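One hedged way to picture the data-conversion step described in the abstract above: a ternary sequence labeler decides, token by token, how to turn a monolingual sentence into a code-mixed one. The label set (KEEP / SWITCH / TRANSLITERATE), the dictionary lookups, and the example below are illustrative assumptions only; the paper's m-BERT labeler and its actual label semantics may differ.

```python
from typing import Dict, List

def code_mix(tokens: List[str],
             labels: List[str],
             translate: Dict[str, str],
             transliterate: Dict[str, str]) -> List[str]:
    """Convert a monolingual sentence into a code-mixed one, label by label."""
    out = []
    for tok, lab in zip(tokens, labels):
        if lab == "SWITCH":           # replace with the other language's word
            out.append(translate.get(tok, tok))
        elif lab == "TRANSLITERATE":  # keep the word, change only its script
            out.append(transliterate.get(tok, tok))
        else:                         # KEEP: leave the token untouched
            out.append(tok)
    return out

# toy example with made-up lookups; a real pipeline would use a trained labeler
tokens = ["mujhe", "naya", "phone", "chahiye"]
labels = ["KEEP", "SWITCH", "KEEP", "KEEP"]
print(code_mix(tokens, labels, translate={"naya": "new"}, transliterate={}))
```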
Rethinking Perturbations in Encoder-Decoders for Fast Training
|
Sho Takase, Shun Kiyono
|
We often use perturbations to regularize neural models. For neural encoder-decoders, previous studies applied scheduled sampling (Bengio et al., 2015) and adversarial perturbations (Sato et al., 2019) as perturbations, but these methods require considerable computational time. Thus, this study addresses the question of whether these approaches are efficient enough in terms of training time. We compare several perturbations in sequence-to-sequence problems with respect to computational time. Experimental results show that simple techniques such as word dropout (Gal and Ghahramani, 2016) and random replacement of input tokens achieve comparable (or better) scores than the recently proposed perturbations, even though these simple methods are faster.
|
https://aclanthology.org/2021.naacl-main.460
|
https://aclanthology.org/2021.naacl-main.460.pdf
|
NAACL 2021
|
|||
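The two "simple techniques" highlighted in the abstract above are easy to state in code. The sketch below applies word dropout (replacing a token with an unknown marker) and random token replacement to a token sequence; the probabilities, the special token, and the toy vocabulary are illustrative choices rather than anything prescribed by the paper.

```python
import random
from typing import List

def word_dropout(tokens: List[str], p: float = 0.1, unk: str = "<unk>") -> List[str]:
    """Replace each token with an unknown marker with probability p."""
    return [unk if random.random() < p else t for t in tokens]

def random_replacement(tokens: List[str], vocab: List[str], p: float = 0.1) -> List[str]:
    """Replace each token with a random vocabulary word with probability p."""
    return [random.choice(vocab) if random.random() < p else t for t in tokens]

random.seed(0)
src = "the quick brown fox jumps over the lazy dog".split()
print(word_dropout(src, p=0.2))
print(random_replacement(src, vocab=["cat", "runs", "blue"], p=0.2))
```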
Context-aware Decoder for Neural Machine Translation using a Target-side Document-Level Language Model
|
Amane Sugiyama, Naoki Yoshinaga
|
Although many end-to-end context-aware neural machine translation models have been proposed to incorporate inter-sentential contexts in translation, these models can be trained only in domains where parallel documents with sentential alignments exist. We therefore present a simple method to perform context-aware decoding with any pre-trained sentence-level translation model by using a document-level language model. Our context-aware decoder is built upon sentence-level parallel data and target-side document-level monolingual data. From a theoretical viewpoint, our core contribution is the novel representation of contextual information using point-wise mutual information between context and the current sentence. We demonstrate the effectiveness of our method on English to Russian translation, by evaluating with BLEU and contrastive tests for context-aware translation.
|
https://aclanthology.org/2021.naacl-main.461
|
https://aclanthology.org/2021.naacl-main.461.pdf
|
NAACL 2021
|
|||
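The core idea in the abstract above, adding the point-wise mutual information between the document context and the current target sentence (estimated with a target-side document-level LM) to the sentence-level translation score, can be pictured as a reranking step. The scoring functions, the interpolation weight, and the toy stand-ins below are placeholders for illustration, not the paper's actual decoder.

```python
from typing import Callable, List

def context_aware_score(src: str, hyp: str, context: str,
                        mt_logp: Callable[[str, str], float],
                        lm_logp: Callable[..., float],
                        weight: float = 1.0) -> float:
    """Sentence-level translation score plus weighted PMI(context; hypothesis),
    where the PMI is estimated with a target-side document-level LM."""
    pmi = lm_logp(hyp, context=context) - lm_logp(hyp)
    return mt_logp(src, hyp) + weight * pmi

def rerank(src: str, hypotheses: List[str], context: str, mt_logp, lm_logp) -> str:
    """Pick the candidate translation that best fits the preceding document."""
    return max(hypotheses,
               key=lambda h: context_aware_score(src, h, context, mt_logp, lm_logp))

# Toy stand-ins so the sketch runs end to end; a real system would wrap a
# sentence-level NMT model and a document-level language model here.
def toy_mt_logp(src: str, hyp: str) -> float:
    return 0.0  # pretend both candidates are equally good sentence-level translations

def toy_lm_logp(hyp: str, context: str = "") -> float:
    bonus = sum(w in context.split() for w in hyp.split())
    return -len(hyp.split()) + bonus

print(rerank("er setzte sich auf die bank",
             ["he sat down on the bank", "he sat down on the bench"],
             context="she was resting on a bench in the park",
             mt_logp=toy_mt_logp, lm_logp=toy_lm_logp))
```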
Machine Translated Text Detection Through Text Similarity with Round-Trip Translation
|
Hoang-Quoc Nguyen-Son, Tran Thao, Seira Hidano, Ishita Gupta, Shinsaku Kiyomoto
|
Translated texts have been used for malicious purposes, i.e., plagiarism or fake reviews. Existing detectors have been built around a specific translator (e.g., Google) but fail to detect a translated text from a strange translator. If we use the same translator, the translated text is similar to its round-trip translation, which is when text is translated into another language and translated back into the original language. However, a round-trip translated text is significantly different from the original text or a translated text using a strange translator. Hence, we propose a detector using text similarity with round-trip translation (TSRT). TSRT achieves 86.9% accuracy in detecting a translated text from a strange translator. It outperforms existing detectors (77.9%) and human recognition (53.3%).
|
https://aclanthology.org/2021.naacl-main.462
|
https://aclanthology.org/2021.naacl-main.462.pdf
|
NAACL 2021
|
|||
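The detection rule described in the abstract above boils down to: translate the text out and back with the suspected translator, then compare it to the original. The translator call and the similarity measure below are stand-ins (an identity function and word-level Jaccard overlap), not the TSRT model itself; the threshold is likewise an assumption.

```python
from typing import Callable

def jaccard(a: str, b: str) -> float:
    """Toy text similarity: word-level Jaccard overlap."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def looks_machine_translated(text: str,
                             round_trip: Callable[[str], str],
                             threshold: float = 0.8) -> bool:
    """Flag text whose round-trip translation stays suspiciously close to it."""
    return jaccard(text, round_trip(text)) >= threshold

# Stand-in round-trip function; a real pipeline would call the suspected
# translator twice (e.g. en -> fr -> en).
identity_round_trip = lambda s: s
print(looks_machine_translated("this sentence was produced by a translator",
                               identity_round_trip))
```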
TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference
|
Deming Ye, Yankai Lin, Yufei Huang, Maosong Sun
|
Existing pre-trained language models (PLMs) are often computationally expensive in inference, making them impractical in various resource-limited real-world applications. To address this issue, we propose a dynamic token reduction approach to accelerate PLMs’ inference, named TR-BERT, which could flexibly adapt the layer number of each token in inference to avoid redundant calculation. Specifically, TR-BERT formulates the token reduction process as a multi-step token selection problem and automatically learns the selection strategy via reinforcement learning. The experimental results on several downstream NLP tasks show that TR-BERT is able to speed up BERT by 2-5 times to satisfy various performance demands. Moreover, TR-BERT can also achieve better performance with less computation in a suite of long-text tasks since its token-level layer number adaption greatly accelerates the self-attention operation in PLMs. The source code and experiment details of this paper can be obtained from https://github.com/thunlp/TR-BERT.
|
https://aclanthology.org/2021.naacl-main.463
|
https://aclanthology.org/2021.naacl-main.463.pdf
|
NAACL 2021
|
|||
Breadth First Reasoning Graph for Multi-hop Question Answering
|
Yongjie Huang, Meng Yang
|
Recently, the Graph Neural Network (GNN) has been used as a promising tool in the multi-hop question answering task. However, unnecessary updates and simple edge constructions prevent accurate answer span extraction in a more direct and interpretable way. In this paper, we propose a novel model of Breadth First Reasoning Graph (BFR-Graph), which presents a new message passing way that better conforms to the reasoning process. In BFR-Graph, the reasoning message is required to start from the question node and pass to the next sentence nodes hop by hop until all the edges have been passed, which can effectively prevent each node from over-smoothing or being updated multiple times unnecessarily. To introduce more semantics, we also define the reasoning graph as a weighted graph, considering the number of co-occurring entities and the distance between sentences. Then we present a more direct and interpretable way to aggregate scores from different levels of granularity based on the GNN. On the HotpotQA leaderboard, the proposed BFR-Graph achieves state-of-the-art results on answer span prediction.
|
https://aclanthology.org/2021.naacl-main.464
|
https://aclanthology.org/2021.naacl-main.464.pdf
|
NAACL 2021
|
|||
Improving Zero-Shot Cross-lingual Transfer for Multilingual Question Answering over Knowledge Graph
|
Yucheng Zhou, Xiubo Geng, Tao Shen, Wenqiang Zhang, Daxin Jiang
|
Multilingual question answering over knowledge graph (KGQA) aims to derive answers from a knowledge graph (KG) for questions in multiple languages. To be widely applicable, we focus on its zero-shot transfer setting. That is, we can only access training data in a high-resource language, while we need to answer multilingual questions without any labeled data in target languages. A straightforward approach is resorting to pre-trained multilingual models (e.g., mBERT) for cross-lingual transfer, but there is still a significant gap in KGQA performance between source and target languages. In this paper, we exploit unsupervised bilingual lexicon induction (BLI) to map training questions in the source language into those in the target language as augmented training data, which circumvents language inconsistency between training and inference. Furthermore, we propose an adversarial learning strategy to alleviate the syntax disorder of the augmented data, encouraging the model towards both language- and syntax-independence. Consequently, our model narrows the gap in zero-shot cross-lingual transfer. Experiments on two multilingual KGQA datasets with 11 zero-resource languages verify its effectiveness.
|
https://aclanthology.org/2021.naacl-main.465
|
https://aclanthology.org/2021.naacl-main.465.pdf
|
NAACL 2021
|
|||
RocketQA: An Optimized Training Approach to Dense Passage Retrieval for Open-Domain Question Answering
|
Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, Haifeng Wang
|
In open-domain question answering, dense passage retrieval has become a new paradigm to retrieve relevant passages for finding answers. Typically, the dual-encoder architecture is adopted to learn dense representations of questions and passages for semantic matching. However, it is difficult to effectively train a dual-encoder due to the challenges including the discrepancy between training and inference, the existence of unlabeled positives and limited training data. To address these challenges, we propose an optimized training approach, called RocketQA, to improving dense passage retrieval. We make three major technical contributions in RocketQA, namely cross-batch negatives, denoised hard negatives and data augmentation. The experiment results show that RocketQA significantly outperforms previous state-of-the-art models on both MSMARCO and Natural Questions. We also conduct extensive experiments to examine the effectiveness of the three strategies in RocketQA. Besides, we demonstrate that the performance of end-to-end QA can be improved based on our RocketQA retriever.
|
https://aclanthology.org/2021.naacl-main.466
|
https://aclanthology.org/2021.naacl-main.466.pdf
|
NAACL 2021
|
|||
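A compact sketch of the dual-encoder training signal behind the cross-batch negatives idea in the abstract above: every passage in the (gathered) batch that is not a question's own positive serves as a negative. The encoders are random stand-ins, and the distributed all-gather that makes the negatives "cross-batch" is only indicated in a comment; this is not RocketQA's actual training code.

```python
import torch
import torch.nn.functional as F

def in_batch_negative_loss(q_emb: torch.Tensor, p_emb: torch.Tensor) -> torch.Tensor:
    """Dual-encoder loss: row i's positive passage is column i, all others are negatives.

    q_emb: [N, dim] question embeddings; p_emb: [N, dim] their positive passages.
    With cross-batch negatives, p_emb would first be all-gathered across GPUs,
    so the pool of negatives grows with the number of devices.
    """
    scores = q_emb @ p_emb.t()             # [N, N] similarity matrix
    targets = torch.arange(q_emb.size(0))  # the diagonal holds the positives
    return F.cross_entropy(scores, targets)

# toy usage with random "encoder outputs" standing in for real dual encoders
questions = torch.randn(4, 64, requires_grad=True)
passages = torch.randn(4, 64, requires_grad=True)
in_batch_negative_loss(questions, passages).backward()
```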
DAGN: Discourse-Aware Graph Network for Logical Reasoning
|
Yinya Huang, Meng Fang, Yu Cao, Liwei Wang, Xiaodan Liang
|
Recent QA with logical reasoning questions requires passage-level relations among the sentences. However, current approaches still focus on sentence-level relations interacting among tokens. In this work, we explore aggregating passage-level clues for solving logical reasoning QA by using discourse-based information. We propose a discourse-aware graph network (DAGN) that reasons relying on the discourse structure of the texts. The model encodes discourse information as a graph with elementary discourse units (EDUs) and discourse relations, and learns the discourse-aware features via a graph network for downstream QA tasks. Experiments are conducted on two logical reasoning QA datasets, ReClor and LogiQA, and our proposed DAGN achieves competitive results. The source code is available at https://github.com/Eleanor-H/DAGN.
|
https://aclanthology.org/2021.naacl-main.467
|
https://aclanthology.org/2021.naacl-main.467.pdf
|
NAACL 2021
|
|||
Designing a Minimal Retrieve-and-Read System for Open-Domain Question Answering
|
Sohee Yang, Minjoon Seo
|
In open-domain question answering (QA), the retrieve-and-read mechanism has the inherent benefits of interpretability and the ease of adding, removing, or editing knowledge, compared to the parametric approaches of closed-book QA models. However, it is also known to suffer from a large storage footprint due to its document corpus and index. Here, we discuss several orthogonal strategies to drastically reduce the footprint of a retrieve-and-read open-domain QA system by up to 160x. Our results indicate that retrieve-and-read can be a viable option even in a highly constrained serving environment such as edge devices, as we show that it can achieve better accuracy than a purely parametric model with comparable docker-level system size.
|
https://aclanthology.org/2021.naacl-main.468
|
https://aclanthology.org/2021.naacl-main.468.pdf
|
NAACL 2021
|
|||
Unsupervised Multi-hop Question Answering by Question Generation
|
Liangming Pan, Wenhu Chen, Wenhan Xiong, Min-Yen Kan, William Yang Wang
|
Obtaining training data for multi-hop question answering (QA) is time-consuming and resource-intensive. We explore the possibility of training a well-performing multi-hop QA model without referencing any human-labeled multi-hop question-answer pairs, i.e., unsupervised multi-hop QA. We propose MQA-QG, an unsupervised framework that can generate human-like multi-hop training data from both homogeneous and heterogeneous data sources. MQA-QG generates questions by first selecting/generating relevant information from each data source and then integrating the multiple pieces of information to form a multi-hop question. Using only generated training data, we can train a competent multi-hop QA model which achieves 61% and 83% of the supervised learning performance on the HybridQA and HotpotQA datasets, respectively. We also show that pretraining the QA system with the generated data would greatly reduce the demand for human-annotated training data. Our codes are publicly available at https://github.com/teacherpeterpan/Unsupervised-Multi-hop-QA.
|
https://aclanthology.org/2021.naacl-main.469
|
https://aclanthology.org/2021.naacl-main.469.pdf
|
NAACL 2021
|
|||
Sliding Selector Network with Dynamic Memory for Extractive Summarization of Long Documents
|
Peng Cui, Le Hu
|
Neural summarization models suffer from the length limitation of the text encoder. Long documents have to be truncated before they are sent to the model, which results in a huge loss of summary-relevant content. To address this issue, we propose a sliding selector network with dynamic memory for extractive summarization of long-form documents, which employs a sliding window to extract summary sentences segment by segment. Moreover, we adopt a memory mechanism to preserve and update the history information dynamically, allowing the semantic flow across different windows. Experimental results on two large-scale datasets that consist of scientific papers demonstrate that our model substantially outperforms previous state-of-the-art models. Besides, we perform qualitative and quantitative investigations on how our model works and where the performance gain comes from.
|
https://aclanthology.org/2021.naacl-main.470
|
https://aclanthology.org/2021.naacl-main.470.pdf
|
NAACL 2021
|
|||
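A schematic of the segment-by-segment extraction loop described in the abstract above: slide a window over the document's sentences, score each sentence against a running memory of what has already been selected, keep the top ones, and update the memory. The salience score and the set-based memory below are toy stand-ins, not the paper's neural components.

```python
from typing import List, Tuple

def sliding_extract(sentences: List[str],
                    window: int = 3,
                    per_window: int = 1) -> List[str]:
    """Extract sentences window by window, carrying a tiny 'memory' of
    already-selected words so later windows avoid redundant picks."""
    memory: set = set()
    selected: List[str] = []
    for start in range(0, len(sentences), window):
        segment = sentences[start:start + window]
        # toy salience: long sentences score high, overlap with memory scores low
        scored: List[Tuple[float, str]] = [
            (len(s.split()) - 2 * len(set(s.lower().split()) & memory), s)
            for s in segment
        ]
        for _, sent in sorted(scored, reverse=True)[:per_window]:
            selected.append(sent)
            memory |= set(sent.lower().split())   # dynamic memory update
    return selected

doc = ["Long documents overflow fixed-length encoders.",
       "Truncation throws away summary-relevant content.",
       "A sliding window visits every segment.",
       "A memory carries information across windows."]
print(sliding_extract(doc, window=2))
```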
AdaptSum: Towards Low-Resource Domain Adaptation for Abstractive Summarization
|
Tiezheng Yu, Zihan Liu, Pascale Fung
|
State-of-the-art abstractive summarization models generally rely on extensive labeled data, which lowers their generalization ability on domains where such data are not available. In this paper, we present a study of domain adaptation for the abstractive summarization task across six diverse target domains in a low-resource setting. Specifically, we investigate the second phase of pre-training on large-scale generative models under three different settings: 1) source domain pre-training; 2) domain-adaptive pre-training; and 3) task-adaptive pre-training. Experiments show that the effectiveness of pre-training is correlated with the similarity between the pre-training data and the target domain task. Moreover, we find that continuing pre-training could lead to the pre-trained model’s catastrophic forgetting, and a learning method with less forgetting can alleviate this issue. Furthermore, results illustrate that a huge gap still exists between the low-resource and high-resource settings, which highlights the need for more advanced domain adaptation methods for the abstractive summarization task.
|
https://aclanthology.org/2021.naacl-main.471
|
https://aclanthology.org/2021.naacl-main.471.pdf
|
NAACL 2021
|
|||
QMSum: A New Benchmark for Query-based Multi-domain Meeting Summarization
|
Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, Dragomir Radev
|
Meetings are a key component of human collaboration. As increasing numbers of meetings are recorded and transcribed, meeting summaries have become essential to remind those who may or may not have attended the meetings about the key decisions made and the tasks to be completed. However, it is hard to create a single short summary that covers all the content of a long meeting involving multiple people and topics. In order to satisfy the needs of different types of users, we define a new query-based multi-domain meeting summarization task, where models have to select and summarize relevant spans of meetings in response to a query, and we introduce QMSum, a new benchmark for this task. QMSum consists of 1,808 query-summary pairs over 232 meetings in multiple domains. Besides, we investigate a locate-then-summarize method and evaluate a set of strong summarization baselines on the task. Experimental results and manual analysis reveal that QMSum presents significant challenges in long meeting summarization for future research. Dataset is available at https://github.com/Yale-LILY/QMSum.
|
https://aclanthology.org/2021.naacl-main.472
|
https://aclanthology.org/2021.naacl-main.472.pdf
|
NAACL 2021
|
|||
MM-AVS: A Full-Scale Dataset for Multi-modal Summarization
|
Xiyan Fu, Jun Wang, Zhenglu Yang
|
Multimodal summarization is becoming increasingly significant as it is the basis for question answering, web search, and many other downstream tasks. However, its learning materials have lacked a holistic organization that integrates resources from various modalities, thereby lagging behind the research progress of this field. In this study, we release a full-scale multimodal dataset comprehensively gathering documents, summaries, images, captions, videos, audios, transcripts, and titles in English from CNN and Daily Mail. To the best of our knowledge, this is the first collection that spans all modalities and comprises nearly all types of materials available in this community. In addition, we devise a baseline model based on the novel dataset, which employs a newly proposed Jump-Attention mechanism based on transcripts. The experimental results validate the important assistance role of the external information for multimodal summarization.
|
https://aclanthology.org/2021.naacl-main.473
|
https://aclanthology.org/2021.naacl-main.473.pdf
|
NAACL 2021
|
|||
MediaSum: A Large-scale Media Interview Dataset for Dialogue Summarization
|
Chenguang Zhu, Yang Liu, Jie Mei, Michael Zeng
|
This paper introduces MediaSum, a large-scale media interview dataset consisting of 463.6K transcripts with abstractive summaries. To create this dataset, we collect interview transcripts from NPR and CNN and employ the overview and topic descriptions as summaries. Compared with existing public corpora for dialogue summarization, our dataset is an order of magnitude larger and contains complex multi-party conversations from multiple domains. We conduct statistical analysis to demonstrate the unique positional bias exhibited in the transcripts of televised and radioed interviews. We also show that MediaSum can be used in transfer learning to improve a model’s performance on other dialogue summarization tasks.
|
https://aclanthology.org/2021.naacl-main.474
|
https://aclanthology.org/2021.naacl-main.474.pdf
|
NAACL 2021
|
|||
Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection
|
Sihao Chen, Fan Zhang, Kazoo Sone, Dan Roth
|
Despite significant progress in neural abstractive summarization, recent studies have shown that the current models are prone to generating summaries that are unfaithful to the original context. To address the issue, we study contrast candidate generation and selection as a model-agnostic post-processing technique to correct the extrinsic hallucinations (i.e. information not present in the source text) in unfaithful summaries. We learn a discriminative correction model by generating alternative candidate summaries where named entities and quantities in the generated summary are replaced with ones with compatible semantic types from the source document. This model is then used to select the best candidate as the final output summary. Our experiments and analysis across a number of neural summarization systems show that our proposed method is effective in identifying and correcting extrinsic hallucinations. We analyze the typical hallucination phenomenon by different types of neural summarization systems, in the hope of providing insights for future work in this direction.
|
https://aclanthology.org/2021.naacl-main.475
|
https://aclanthology.org/2021.naacl-main.475.pdf
|
NAACL 2021
|
|||
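The candidate-generation half of the method in the abstract above can be pictured as swapping each named entity in a summary for a same-type entity found in the source document; a separately trained discriminator then picks the most faithful candidate. The entity extraction and typing below are toy dictionaries standing in for a real NER system, and the ranking step is only indicated in a comment.

```python
from typing import Dict, List

def contrast_candidates(summary_tokens: List[str],
                        source_entities: Dict[str, List[str]],
                        summary_entities: Dict[str, str]) -> List[List[str]]:
    """Generate alternative summaries by replacing each summary entity with
    entities of a compatible type taken from the source document.

    summary_entities maps an entity surface form in the summary to its type;
    source_entities maps a type to the entity surface forms seen in the source.
    """
    candidates = [summary_tokens]
    for ent, ent_type in summary_entities.items():
        for replacement in source_entities.get(ent_type, []):
            if replacement == ent:
                continue
            candidates.append([replacement if tok == ent else tok
                               for tok in summary_tokens])
    return candidates

summary = "Smith joined Google in 2015".split()
cands = contrast_candidates(
    summary,
    source_entities={"ORG": ["Google", "Microsoft"], "DATE": ["2017"]},
    summary_entities={"Google": "ORG", "2015": "DATE"},
)
for c in cands:
    print(" ".join(c))  # a faithfulness model would then rank these candidates
```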
Inference Time Style Control for Summarization
|
Shuyang Cao, Lu Wang
|
How can we generate summaries of different styles without requiring corpora in the target styles, or training separate models? We present two novel methods that can be deployed during summary decoding on any pre-trained Transformer-based summarization model. (1) Decoder state adjustment instantly modifies decoder final states with externally trained style scorers, to iteratively refine the output against a target style. (2) Word unit prediction constrains the word usage to impose strong lexical control during generation. In experiments on summarization with simplicity control, automatic evaluation and human judges both find our models producing outputs in simpler language while still being informative. We also generate news headlines with various ideological leanings, which can be distinguished by humans with a reasonable probability.
|
https://aclanthology.org/2021.naacl-main.476
|
https://aclanthology.org/2021.naacl-main.476.pdf
|
NAACL 2021
|
|||
ReinforceBug: A Framework to Generate Adversarial Textual Examples
|
Bushra Sabir, Muhammad Ali Babar, Raj Gaire
|
Adversarial Examples (AEs) generated by perturbing training examples are useful in improving the robustness of Deep Learning (DL) based models. Most prior works generate AEs that are either unconscionable due to lexical errors or semantically and functionally deviant from the original examples. In this paper, we present ReinforceBug, a reinforcement learning framework that learns a policy that is transferable to unseen datasets and generates utility-preserving and transferable (on other models) AEs. Our experiments show that ReinforceBug is on average 10% more successful than the state-of-the-art attack TextFooler. Moreover, the target models have on average 73.64% confidence in their wrong predictions, the generated AEs preserve functional equivalence and semantic similarity (83.38%) to their original counterparts, and are transferable to other models with an average success rate of 46%.
|
https://aclanthology.org/2021.naacl-main.477
|
https://aclanthology.org/2021.naacl-main.477.pdf
|
NAACL 2021
|
|||
PhoNLP: A joint multi-task learning model for Vietnamese part-of-speech tagging, named entity recognition and dependency parsing
|
Linh The Nguyen, Dat Quoc Nguyen
|
We present the first multi-task learning model – named PhoNLP – for joint Vietnamese part-of-speech (POS) tagging, named entity recognition (NER) and dependency parsing. Experiments on Vietnamese benchmark datasets show that PhoNLP produces state-of-the-art results, outperforming a single-task learning approach that fine-tunes the pre-trained Vietnamese language model PhoBERT (Nguyen and Nguyen, 2020) for each task independently. We publicly release PhoNLP as an open-source toolkit under the Apache License 2.0. Although we specify PhoNLP for Vietnamese, our PhoNLP training and evaluation command scripts can in fact work directly for other languages that have a pre-trained BERT-based language model and gold annotated corpora available for the three tasks of POS tagging, NER and dependency parsing. We hope that PhoNLP can serve as a strong baseline and useful toolkit for future NLP research and applications not only for Vietnamese but also for other languages. Our PhoNLP is available at https://github.com/VinAIResearch/PhoNLP.
|
https://aclanthology.org/2021.naacl-demos.1
|
https://aclanthology.org/2021.naacl-demos.1.pdf
|
NAACL 2021
|
|||
Machine-Assisted Script Curation
|
Manuel Ciosici, Joseph Cummings, Mitchell DeHaven, Alex Hedges, Yash Kankanampati, Dong-Ho Lee, Ralph Weischedel, Marjorie Freedman
|
We describe Machine-Aided Script Curator (MASC), a system for human-machine collaborative script authoring. Scripts produced with MASC include (1) English descriptions of sub-events that comprise a larger, complex event; (2) event types for each of those events; (3) a record of entities expected to participate in multiple sub-events; and (4) temporal sequencing between the sub-events. MASC automates portions of the script creation process with suggestions for event types, links to Wikidata, and sub-events that may have been forgotten. We illustrate how these automations are useful to the script writer with a few case-study scripts.
|
https://aclanthology.org/2021.naacl-demos.2
|
https://aclanthology.org/2021.naacl-demos.2.pdf
|
NAACL 2021
|
|||
NAMER: A Node-Based Multitasking Framework for Multi-Hop Knowledge Base Question Answering
|
Minhao Zhang, Ruoyu Zhang, Lei Zou, Yinnian Lin, Sen Hu
|
We present NAMER, an open-domain Chinese knowledge base question answering system based on a novel node-based framework that better grasps the structural mapping between questions and KB queries by aligning the nodes in a query with their corresponding mentions in the question. Equipped with techniques including data augmentation and multitasking, we show that the proposed framework outperforms the previous SoTA on the CCKS CKBQA dataset. Moreover, we develop a novel data annotation strategy that facilitates the node-to-mention alignment; a dataset built with this strategy (https://github.com/ridiculouz/CKBQA) is also published to promote further research. An online demo of NAMER (http://kbqademo.gstore.cn) is provided to visualize our framework and supply extra information for users, and a video illustration of NAMER (https://youtu.be/yetnVye_hg4) is also available.
|
https://aclanthology.org/2021.naacl-demos.3
|
https://aclanthology.org/2021.naacl-demos.3.pdf
|
NAACL 2021
|
|||
DiSCoL: Toward Engaging Dialogue Systems through Conversational Line Guided Response Generation
|
Sarik Ghazarian, Zixi Liu, Tuhin Chakrabarty, Xuezhe Ma, Aram Galstyan, Nanyun Peng
|
Having engaging and informative conversations with users is the utmost goal for open-domain conversational systems. Recent advances in transformer-based language models and their applications to dialogue systems have succeeded in generating fluent and human-like responses. However, they still lack control over the generation process towards producing contentful responses and achieving engaging conversations. To achieve this goal, we present DiSCoL (Dialogue Systems through Conversational Line guided response generation). DiSCoL is an open-domain dialogue system that leverages conversational lines (briefly convlines) as controllable and informative content-planning elements to guide the generation model to produce engaging and informative responses. Two primary modules in DiSCoL’s pipeline are conditional generators trained for 1) predicting relevant and informative convlines for dialogue contexts and 2) generating high-quality responses conditioned on the predicted convlines. Users can also change the returned convlines to steer the direction of the conversation towards topics that are more interesting to them. Through automatic and human evaluations, we demonstrate the efficiency of the convlines in producing engaging conversations.
|
https://aclanthology.org/2021.naacl-demos.4
|
https://aclanthology.org/2021.naacl-demos.4.pdf
|
NAACL 2021
|
|||
FITAnnotator: A Flexible and Intelligent Text Annotation System
|
Yanzeng Li, Bowen Yu, Li Quangang, Tingwen Liu
|
In this paper, we introduce FITAnnotator, a generic web-based tool for efficient text annotation. Benefiting from its fully modular architecture design, FITAnnotator provides a systematic solution for the annotation of a variety of natural language processing tasks, including classification, sequence tagging and semantic role annotation, regardless of the language. Three kinds of interfaces are developed to annotate instances, evaluate annotation quality and manage the annotation task for annotators, reviewers and managers, respectively. FITAnnotator also provides intelligent annotation suggestions by introducing a task-specific assistant to support and guide the annotators based on active learning and incremental learning strategies. This assistant is able to effectively update from the annotators' feedback and easily handle incremental labeling scenarios.
|
https://aclanthology.org/2021.naacl-demos.5
|
https://aclanthology.org/2021.naacl-demos.5.pdf
|
NAACL 2021
|
|||
Robustness Gym: Unifying the NLP Evaluation Landscape
|
Karan Goel, Nazneen Fatema Rajani, Jesse Vig, Zachary Taschdjian, Mohit Bansal, Christopher Ré
|
Despite impressive performance on standard benchmarks, natural language processing (NLP) models are often brittle when deployed in real-world systems. In this work, we identify challenges with evaluating NLP systems and propose a solution in the form of Robustness Gym (RG), a simple and extensible evaluation toolkit that unifies 4 standard evaluation paradigms: subpopulations, transformations, evaluation sets, and adversarial attacks. By providing a common platform for evaluation, RG enables practitioners to compare results from disparate evaluation paradigms with a single click, and to easily develop and share novel evaluation methods using a built-in set of abstractions. RG is under active development and we welcome feedback & contributions from the community.
|
https://aclanthology.org/2021.naacl-demos.6
|
https://aclanthology.org/2021.naacl-demos.6.pdf
|
NAACL 2021
|
|||
EventPlus: A Temporal Event Understanding Pipeline
|
Mingyu Derek Ma, Jiao Sun, Mu Yang, Kung-Hsiang Huang, Nuan Wen, Shikhar Singh, Rujun Han, Nanyun Peng
|
We present EventPlus, a temporal event understanding pipeline that integrates various state-of-the-art event understanding components, including event trigger and type detection, event argument detection, and event duration and temporal relation extraction. Event information, especially event temporal knowledge, is a type of common sense knowledge that helps people understand how stories evolve and provides predictive hints for future events. EventPlus, as the first comprehensive temporal event understanding pipeline, provides a convenient tool for users to quickly obtain annotations about events and their temporal information for any user-provided document. Furthermore, we show EventPlus can be easily adapted to other domains (e.g., the biomedical domain). We make EventPlus publicly available to facilitate event-related information extraction and downstream applications.
|
https://aclanthology.org/2021.naacl-demos.7
|
https://aclanthology.org/2021.naacl-demos.7.pdf
|
NAACL 2021
|
|||
COVID-19 Literature Knowledge Graph Construction and Drug Repurposing Report Generation
|
Qingyun Wang, Manling Li, Xuan Wang, Nikolaus Parulian, Guangxing Han, Jiawei Ma, Jingxuan Tu, Ying Lin, Ranran Haoran Zhang, Weili Liu, Aabhas Chauhan, Yingjun Guan, Bangzheng Li, Ruisong Li, Xiangchen Song, Yi Fung, Heng Ji, Jiawei Han, Shih-Fu Chang, James Pustejovsky, Jasmine Rah, David Liem, Ahmed ELsayed, Martha Palmer, Clare Voss, Cynthia Schneider, Boyan Onyshkevych
|
To combat COVID-19, both clinicians and scientists need to digest the vast amount of relevant biomedical knowledge in the literature to understand the disease mechanism and the related biological functions. We have developed a novel and comprehensive knowledge discovery framework, COVID-KG, to extract fine-grained multimedia knowledge elements (entities, relations and events) from scientific literature. We then exploit the constructed multimedia knowledge graphs (KGs) for question answering and report generation, using drug repurposing as a case study. Our framework also provides detailed contextual sentences, subfigures, and knowledge subgraphs as evidence. All of the data, KGs, and reports are made publicly available.
|
https://aclanthology.org/2021.naacl-demos.8
|
https://aclanthology.org/2021.naacl-demos.8.pdf
|
NAACL 2021
|
|||
Multifaceted Domain-Specific Document Embeddings
|
Julian Risch, Philipp Hager, Ralf Krestel
|
Current document embeddings require large training corpora but fail to learn high-quality representations when confronted with a small number of domain-specific documents and rare terms. Further, they transform each document into a single embedding vector, making it hard to capture different notions of document similarity or explain why two documents are considered similar. In this work, we propose our Faceted Domain Encoder, a novel approach to learn multifaceted embeddings for domain-specific documents. It is based on a Siamese neural network architecture and leverages knowledge graphs to further enhance the embeddings even if only a few training samples are available. The model identifies different types of domain knowledge and encodes them into separate dimensions of the embedding, thereby enabling multiple ways of finding and comparing related documents in the vector space. We evaluate our approach on two benchmark datasets and find that it achieves the same embedding quality as state-of-the-art models while requiring only a tiny fraction of their training data. An interactive demo, our source code, and the evaluation datasets are available online: https://hpi.de/naumann/s/multifaceted-embeddings and a screencast is available on YouTube: https://youtu.be/HHcsX2clEwg.
|
https://aclanthology.org/2021.naacl-demos.9
|
https://aclanthology.org/2021.naacl-demos.9.pdf
|
NAACL 2021
|
|||
Improving Evidence Retrieval for Automated Explainable Fact-Checking
|
Chris Samarinas, Wynne Hsu, Mong Li Lee
|
Automated fact-checking on a large scale is a challenging task that has not been studied systematically until recently. Large noisy document collections like the web or news articles make the task more difficult. We describe a three-stage automated fact-checking system, named Quin+, using evidence retrieval and selection methods. We demonstrate that using dense passage representations leads to much higher evidence recall in a noisy setting. We also propose two sentence selection approaches, an embedding-based selection using a dense retrieval model, and a sequence labeling approach for context-aware selection. Quin+ is able to verify open-domain claims using results from web search engines.
|
https://aclanthology.org/2021.naacl-demos.10
|
https://aclanthology.org/2021.naacl-demos.10.pdf
|
NAACL 2021
|
|||
Interactive Plot Manipulation using Natural Language
|
Yihan Wang, Yutong Shao, Ndapa Nakashole
|
We present an interactive Plotting Agent, a system that enables users to directly manipulate plots using natural language instructions within an interactive programming environment. The Plotting Agent maps language to plot updates. We formulate this problem as a slot-based task-oriented dialog problem, which we tackle with a sequence-to-sequence model. This plotting model, while accurate in most cases, still makes errors; therefore, the system allows a feedback mode, wherein the user is presented with a top-k list of plots, among which the user can pick the desired one. From this kind of feedback, we can then, in principle, continuously learn and improve the system. Given that plotting is widely used across data-driven fields, we believe our demonstration will be of interest to both practitioners, such as data scientists broadly defined, and researchers interested in natural language interfaces.
|
https://aclanthology.org/2021.naacl-demos.11
|
https://aclanthology.org/2021.naacl-demos.11.pdf
|
NAACL 2021
|
|||
ActiveAnno: General-Purpose Document-Level Annotation Tool with Active Learning Integration
|
Max Wiechmann, Seid Muhie Yimam, Chris Biemann
|
ActiveAnno is an annotation tool focused on document-level annotation tasks developed both for industry and research settings. It is designed to be a general-purpose tool with a wide variety of use cases. It features a modern and responsive web UI for creating annotation projects, conducting annotations, adjudicating disagreements, and analyzing annotation results. ActiveAnno embeds a highly configurable and interactive user interface. The tool also integrates a RESTful API that enables integration into other software systems, including an API for machine learning integration. ActiveAnno is built with extensible design and easy deployment in mind, all to enable users to perform annotation tasks with high efficiency and high-quality annotation results.
|
https://aclanthology.org/2021.naacl-demos.12
|
https://aclanthology.org/2021.naacl-demos.12.pdf
|
NAACL 2021
|
|||
TextEssence: A Tool for Interactive Analysis of Semantic Shifts Between Corpora
|
Denis Newman-Griffis, Venkatesh Sivaraman, Adam Perer, Eric Fosler-Lussier, Harry Hochheiser
|
Embeddings of words and concepts capture syntactic and semantic regularities of language; however, they have seen limited use as tools to study characteristics of different corpora and how they relate to one another. We introduce TextEssence, an interactive system designed to enable comparative analysis of corpora using embeddings. TextEssence includes visual, neighbor-based, and similarity-based modes of embedding analysis in a lightweight, web-based interface. We further propose a new measure of embedding confidence based on nearest neighborhood overlap, to assist in identifying high-quality embeddings for corpus analysis. A case study on COVID-19 scientific literature illustrates the utility of the system. TextEssence can be found at https://textessence.github.io.
|
https://aclanthology.org/2021.naacl-demos.13
|
https://aclanthology.org/2021.naacl-demos.13.pdf
|
NAACL 2021
|
|||
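The embedding-confidence measure mentioned in the abstract above is based on nearest-neighborhood overlap. One plausible reading, sketched below under stated assumptions, compares each term's k nearest neighbors across two embedding runs (e.g., different random seeds) and averages the overlap; the exact definition used in the paper may differ.

```python
import numpy as np

def knn_indices(emb: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k nearest neighbors (cosine) for every row, self excluded."""
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)
    return np.argsort(-sim, axis=1)[:, :k]

def neighborhood_overlap(emb_a: np.ndarray, emb_b: np.ndarray, k: int = 10) -> float:
    """Mean Jaccard overlap of k-NN sets between two embeddings of the same vocabulary."""
    nn_a, nn_b = knn_indices(emb_a, k), knn_indices(emb_b, k)
    overlaps = [
        len(set(a) & set(b)) / len(set(a) | set(b))
        for a, b in zip(nn_a, nn_b)
    ]
    return float(np.mean(overlaps))

# toy check: a second "run" that barely differs should yield overlap close to 1.0
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 50))
noisy = base + 0.01 * rng.normal(size=base.shape)
print(neighborhood_overlap(base, noisy, k=5))
```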
Supporting Spanish Writers using Automated Feedback
|
Aoife Cahill, James Bruno, James Ramey, Gilmar Ayala Meneses, Ian Blood, Florencia Tolentino, Tamar Lavee, Slava Andreyev
|
We present a tool that provides automated feedback to students studying Spanish writing. The feedback is given for four categories: topic development, coherence, writing conventions, and essay organization. The tool is made freely available via a Google Docs add-on. A small user study with third-level students in Mexico shows that students found the tool generally helpful and that most of them plan to continue using it as they work to improve their writing skills.
|
https://aclanthology.org/2021.naacl-demos.14
|
https://aclanthology.org/2021.naacl-demos.14.pdf
|
NAACL 2021
|
|||
Alexa Conversations: An Extensible Data-driven Approach for Building Task-oriented Dialogue Systems
|
Anish Acharya, Suranjit Adhikari, Sanchit Agarwal, Vincent Auvray, Nehal Belgamwar, Arijit Biswas, Shubhra Chandra, Tagyoung Chung, Maryam Fazel-Zarandi, Raefer Gabriel, Shuyang Gao, Rahul Goel, Dilek Hakkani-Tur, Jan Jezabek, Abhay Jha, Jiun-Yu Kao, Prakash Krishnan, Peter Ku, Anuj Goyal, Chien-Wei Lin, Qing Liu, Arindam Mandal, Angeliki Metallinou, Vishal Naik, Yi Pan, Shachi Paul, Vittorio Perera, Abhishek Sethi, Minmin Shen, Nikko Strom, Eddie Wang
|
Traditional goal-oriented dialogue systems rely on various components such as natural language understanding, dialogue state tracking, policy learning and response generation. Training each component requires annotations which are hard to obtain for every new domain, limiting scalability of such systems. Similarly, rule-based dialogue systems require extensive writing and maintenance of rules and do not scale either. End-to-End dialogue systems, on the other hand, do not require module-specific annotations but need a large amount of data for training. To overcome these problems, in this demo, we present Alexa Conversations, a new approach for building goal-oriented dialogue systems that is scalable, extensible as well as data efficient. The components of this system are trained in a data-driven manner, but instead of collecting annotated conversations for training, we generate them using a novel dialogue simulator based on a few seed dialogues and specifications of APIs and entities provided by the developer. Our approach provides out-of-the-box support for natural conversational phenomena such as entity sharing across turns or users changing their mind during conversation, without requiring developers to provide any such dialogue flows. We exemplify our approach using a simple pizza ordering task and showcase its value in reducing the developer burden for creating a robust experience. Finally, we evaluate our system using a typical movie ticket booking task integrated with live APIs and show that the dialogue simulator is an essential component of the system that leads to over 50% improvement in turn-level action signature prediction accuracy.
|
https://aclanthology.org/2021.naacl-demos.15
|
https://aclanthology.org/2021.naacl-demos.15.pdf
|
NAACL 2021
|
|||
RESIN: A Dockerized Schema-Guided Cross-document Cross-lingual Cross-media Information Extraction and Event Tracking System
|
Haoyang Wen, Ying Lin, Tuan Lai, Xiaoman Pan, Sha Li, Xudong Lin, Ben Zhou, Manling Li, Haoyu Wang, Hongming Zhang, Xiaodong Yu, Alexander Dong, Zhenhailong Wang, Yi Fung, Piyush Mishra, Qing Lyu, Dídac Surís, Brian Chen, Susan Windisch Brown, Martha Palmer, Chris Callison-Burch, Carl Vondrick, Jiawei Han, Dan Roth, Shih-Fu Chang, Heng Ji
|
We present a new information extraction system that can automatically construct temporal event graphs from a collection of news documents from multiple sources, multiple languages (English and Spanish for our experiment), and multiple data modalities (speech, text, image and video). The system advances the state-of-the-art in two aspects: (1) extending from sentence-level event extraction to cross-document cross-lingual cross-media event extraction, coreference resolution and temporal event tracking; (2) using a human-curated event schema library to match and enhance the extraction output. We have made the dockerized system publicly available for research purposes on GitHub, with a demo video.
|
https://aclanthology.org/2021.naacl-demos.16
|
https://aclanthology.org/2021.naacl-demos.16.pdf
|
NAACL 2021
|
|||
MUDES: Multilingual Detection of Offensive Spans
|
Tharindu Ranasinghe, Marcos Zampieri
|
The interest in offensive content identification in social media has grown substantially in recent years. Previous work has dealt mostly with post-level annotations. However, identifying offensive spans is useful in many ways. To help cope with this important challenge, we present MUDES, a multilingual system to detect offensive spans in texts. MUDES features pre-trained models, a Python API for developers, and a user-friendly web-based interface. A detailed description of MUDES’ components is presented in this paper.
|
https://aclanthology.org/2021.naacl-demos.17
|
https://aclanthology.org/2021.naacl-demos.17.pdf
|
NAACL 2021
|