| title | authors | abstract | url | detail_url | abs | OpenReview | Download PDF | tags |
|---|---|---|---|---|---|---|---|---|
Beyond Glass-Box Features: Uncertainty Quantification Enhanced Quality Estimation for Neural Machine Translation
|
Ke Wang, Yangbin Shi, Jiayi Wang, Yuqi Zhang, Yu Zhao, Xiaolin Zheng
|
Quality Estimation (QE) plays an essential role in applications of Machine Translation (MT). Traditionally, a QE system accepts the original source text and the translation from a black-box MT system as input. Recently, a few studies have indicated that, as a by-product of translation, QE benefits from information about the model and training data of the MT system that produced the translations; this is called “glass-box QE”. In this paper, we extend the definition of “glass-box QE” to uncertainty quantification with both “black-box” and “glass-box” approaches, and design several features derived from them to blaze a new trail in improving QE’s performance. We propose a framework that fuses the feature engineering of uncertainty quantification into a pre-trained cross-lingual language model to predict translation quality. Experimental results show that our method achieves state-of-the-art performance on the datasets of the WMT 2020 QE shared task.
|
https://aclanthology.org/2021.findings-emnlp.401
|
https://aclanthology.org/2021.findings-emnlp.401.pdf
|
EMNLP 2021
|
|||
Fight Fire with Fire: Fine-tuning Hate Detectors using Large Samples of Generated Hate Speech
|
Tomer Wullach, Amir Adler, Einat Minkov
|
Automatic hate speech detection is hampered by the scarcity of labeled datasets, leading to poor generalization. We employ pretrained language models (LMs) to alleviate this data bottleneck. We utilize the GPT LM to generate large amounts of synthetic hate speech sequences from available labeled examples, and leverage the generated data in fine-tuning large pretrained LMs on hate detection. An empirical study using BERT, RoBERTa and ALBERT shows that this approach improves generalization significantly and consistently within and across data distributions. In fact, we find that generating relevant labeled hate speech sequences is preferable to using out-of-domain, and sometimes also within-domain, human-labeled examples.
|
https://aclanthology.org/2021.findings-emnlp.402
|
https://aclanthology.org/2021.findings-emnlp.402.pdf
|
EMNLP 2021
|
|||
AutoEQA: Auto-Encoding Questions for Extractive Question Answering
|
Stalin Varanasi, Saadullah Amin, Guenter Neumann
|
There has been significant progress in the field of Extractive Question Answering (EQA) in recent years. However, most existing approaches rely on annotations of answer spans in the corresponding passages. In this work, we address the problem of EQA when no annotations are present for the answer span, i.e., when the dataset contains only questions and corresponding passages. Our method is based on auto-encoding of the question, performing a question answering task during encoding and a question generation task during decoding. We show that our method performs well in a zero-shot setting and can provide an additional loss that boosts performance for EQA.
|
https://aclanthology.org/2021.findings-emnlp.403
|
https://aclanthology.org/2021.findings-emnlp.403.pdf
|
EMNLP 2021
|
|||
A Multi-label Multi-hop Relation Detection Model based on Relation-aware Sequence Generation
|
Linhai Zhang, Deyu Zhou, Chao Lin, Yulan He
|
Multi-hop relation detection in Knowledge Base Question Answering (KBQA) aims at retrieving the relation path from the topic entity to the answer node based on a given question, where the relation path may comprise multiple relations. Most existing methods treat it as a single-label learning problem, ignoring the fact that for some complex questions there exist multiple correct relation paths in knowledge bases. Therefore, in this paper, multi-hop relation detection is considered as a multi-label learning problem. However, performing multi-label multi-hop relation detection is challenging since the numbers of both the labels and the hops are unknown. To tackle this challenge, multi-label multi-hop relation detection is formulated as a sequence generation task. A relation-aware sequence generation model is proposed to solve the problem in an end-to-end manner. Experimental results show the effectiveness of the proposed method for relation detection and KBQA.
|
https://aclanthology.org/2021.findings-emnlp.404
|
https://aclanthology.org/2021.findings-emnlp.404.pdf
|
EMNLP 2021
|
|||
Don’t Discard All the Biased Instances: Investigating a Core Assumption in Dataset Bias Mitigation Techniques
|
Hossein Amirkhani, Mohammad Taher Pilehvar
|
Existing techniques for mitigating dataset bias often leverage a biased model to identify biased instances. The role of these biased instances is then reduced during the training of the main model to enhance its robustness to out-of-distribution data. A common core assumption of these techniques is that the main model handles biased instances similarly to the biased model, in that it will resort to biases whenever available. In this paper, we show that this assumption does not hold in general. We carry out a critical investigation on two well-known datasets in the domain, MNLI and FEVER, along with two biased instance detection methods, partial-input and limited-capacity models. Our experiments show that in around a third to a half of instances, the biased model is unable to predict the main model’s behavior, as highlighted by the significantly different parts of the input on which they base their decisions. Based on a manual validation, we also show that this estimate is highly in line with human interpretation. Our findings suggest that down-weighting of instances detected by bias detection methods, which is a widely-practiced procedure, is an unnecessary waste of training data. We release our code to facilitate reproducibility and future research.
|
https://aclanthology.org/2021.findings-emnlp.405
|
https://aclanthology.org/2021.findings-emnlp.405.pdf
|
EMNLP 2021
|
|||
Stacked AMR Parsing with Silver Data
|
Qingrong Xia, Zhenghua Li, Rui Wang, Min Zhang
|
The lack of sufficient human-annotated data is a major challenge for abstract meaning representation (AMR) parsing. To alleviate this problem, previous works usually make use of silver data or pre-trained language models. In particular, one recent seq-to-seq work directly fine-tunes AMR graph sequences on an encoder-decoder pre-trained language model and achieves new state-of-the-art results, outperforming previous works by a large margin. However, this makes decoding slower. In this work, we investigate alternative approaches that achieve competitive performance at faster speeds. We propose a simplified AMR parser and a pre-training technique for the effective usage of silver data. We conduct extensive experiments on the widely used AMR2.0 dataset, and the results demonstrate that our Transformer-based AMR parser achieves the best performance among seq2graph-based models. Furthermore, with silver data, our model achieves results competitive with the SOTA model, at an order of magnitude faster speed. Detailed analyses are conducted to gain more insights into our proposed model and the effectiveness of the pre-training technique.
|
https://aclanthology.org/2021.findings-emnlp.406
|
https://aclanthology.org/2021.findings-emnlp.406.pdf
|
EMNLP 2021
|
|||
Speculative Sampling in Variational Autoencoders for Dialogue Response Generation
|
Shoetsu Sato, Naoki Yoshinaga, Masashi Toyoda, Masaru Kitsuregawa
|
Variational autoencoders have been studied as a promising approach to model one-to-many mappings from context to response in chat response generation. However, they often fail to learn proper mappings. One of the reasons for this failure is the discrepancy between a response and a latent variable sampled from an approximated distribution during training. Inappropriately sampled latent variables hinder models from constructing a modulated latent space. As a result, the models stop handling uncertainty in conversations. To resolve this, we propose speculative sampling of latent variables. Our method chooses the most probable of redundantly sampled latent variables to tie the variable to a given response. We confirm the efficacy of our method in response generation with massive dialogue data constructed from Twitter posts.
|
https://aclanthology.org/2021.findings-emnlp.407
|
https://aclanthology.org/2021.findings-emnlp.407.pdf
|
EMNLP 2021
|
|||
Perceived and Intended Sarcasm Detection with Graph Attention Networks
|
Joan Plepi, Lucie Flek
|
Existing sarcasm detection systems focus on exploiting linguistic markers, context, or user-level priors. However, social studies suggest that the relationship between the author and the audience can be equally relevant for sarcasm usage and interpretation. In this work, we propose a framework that jointly leverages (1) user context from their historical tweets together with (2) social information from a user’s neighborhood in an interaction graph, to contextualize the interpretation of the post. We distinguish between perceived and self-reported sarcasm identification. We use graph attention networks (GAT) over users and tweets in a conversation thread, combined with various dense user history representations. Apart from achieving state-of-the-art results on the recently published dataset of 19K Twitter users with 30K labeled tweets, adding 10M unlabeled tweets as context, our experiments indicate that the graph network contributes more to interpreting the sarcastic intentions of the author than to predicting the sarcasm perception by others.
|
https://aclanthology.org/2021.findings-emnlp.408
|
https://aclanthology.org/2021.findings-emnlp.408.pdf
|
EMNLP 2021
|
|||
Contrastive Representation Learning for Exemplar-Guided Paraphrase Generation
|
Haoran Yang, Wai Lam, Piji Li
|
Exemplar-Guided Paraphrase Generation (EGPG) aims to generate a target sentence which conforms to the style of the given exemplar while encapsulating the content information of the source sentence. In this paper, we propose a new method with the goal of learning a better representation of the style and the content. This method is mainly motivated by the recent success of contrastive learning, which has demonstrated its power in unsupervised feature extraction tasks. The idea is to design two contrastive losses with respect to the content and the style by considering two problem characteristics during training. One characteristic is that the target sentence shares the same content with the source sentence, and the second characteristic is that the target sentence shares the same style with the exemplar. These two contrastive losses are incorporated into the general encoder-decoder paradigm. Experiments on two datasets, namely QQP-Pos and ParaNMT, demonstrate the effectiveness of our proposed contrastive losses.
|
https://aclanthology.org/2021.findings-emnlp.409
|
https://aclanthology.org/2021.findings-emnlp.409.pdf
|
EMNLP 2021
|
|||
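The two contrastive losses described in the EGPG abstract above can be illustrated with an InfoNCE-style objective. This is a hedged sketch of the general form of such a loss, not the authors' implementation: given similarity scores between an anchor representation (e.g. the target's content encoding) and a batch of candidates, the positive pair (the source sentence) is pulled together while the other batch members act as negatives. The function name and temperature value are illustrative assumptions.

```python
import math

# Illustrative InfoNCE-style contrastive loss (a sketch of the general form,
# not the paper's exact objective). `sims` holds similarity scores between an
# anchor and a batch of candidates; the entry at `pos_idx` is the positive pair.
def info_nce(sims, pos_idx, temperature=0.1):
    """Return the contrastive loss: -log softmax of the positive's logit."""
    logits = [s / temperature for s in sims]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[pos_idx] / sum(exps))
```

Minimizing this loss raises the positive pair's similarity relative to the negatives; in the paper's setting one such loss would tie target and source content representations, and a second would tie target and exemplar style representations.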
MAD-G: Multilingual Adapter Generation for Efficient Cross-Lingual Transfer
|
Alan Ansell, Edoardo Maria Ponti, Jonas Pfeiffer, Sebastian Ruder, Goran Glavaš, Ivan Vulić, Anna Korhonen
|
Adapter modules have emerged as a general parameter-efficient means to specialize a pretrained encoder to new domains. Massively multilingual transformers (MMTs) have particularly benefited from additional training of language-specific adapters. However, this approach is not viable for the vast majority of languages, due to limitations in their corpus size or compute budgets. In this work, we propose MAD-G (Multilingual ADapter Generation), which contextually generates language adapters from language representations based on typological features. In contrast to prior work, our time- and space-efficient MAD-G approach enables (1) sharing of linguistic knowledge across languages and (2) zero-shot inference by generating language adapters for unseen languages. We thoroughly evaluate MAD-G in zero-shot cross-lingual transfer on part-of-speech tagging, dependency parsing, and named entity recognition. While offering (1) improved fine-tuning efficiency (by a factor of around 50 in our experiments), (2) a smaller parameter budget, and (3) increased language coverage, MAD-G remains competitive with more expensive methods for language-specific adapter training across the board. Moreover, it offers substantial benefits for low-resource languages, particularly on the NER task in low-resource African languages. Finally, we demonstrate that MAD-G’s transfer performance can be further improved via: (i) multi-source training, i.e., by generating and combining adapters of multiple languages with available task-specific training data; and (ii) by further fine-tuning generated MAD-G adapters for languages with monolingual data.
|
https://aclanthology.org/2021.findings-emnlp.410
|
https://aclanthology.org/2021.findings-emnlp.410.pdf
|
EMNLP 2021
|
|||
Sustainable Modular Debiasing of Language Models
|
Anne Lauscher, Tobias Lueken, Goran Glavaš
|
Unfair stereotypical biases (e.g., gender, racial, or religious biases) encoded in modern pretrained language models (PLMs) have negative ethical implications for widespread adoption of state-of-the-art language technology. To remedy this, a wide range of debiasing techniques have recently been introduced to remove such stereotypical biases from PLMs. Existing debiasing methods, however, directly modify all of the PLM’s parameters, which – besides being computationally expensive – comes with the inherent risk of (catastrophic) forgetting of useful language knowledge acquired in pretraining. In this work, we propose a more sustainable modular debiasing approach based on dedicated debiasing adapters, dubbed ADELE. Concretely, we (1) inject adapter modules into the original PLM layers and (2) update only the adapters (i.e., we keep the original PLM parameters frozen) via language modeling training on a counterfactually augmented corpus. We showcase ADELE in gender debiasing of BERT: our extensive evaluation, encompassing three intrinsic and two extrinsic bias measures, renders ADELE very effective in bias mitigation. We further show that – due to its modular nature – ADELE, coupled with task adapters, retains fairness even after large-scale downstream training. Finally, by means of multilingual BERT, we successfully transfer ADELE to six target languages.
|
https://aclanthology.org/2021.findings-emnlp.411
|
https://aclanthology.org/2021.findings-emnlp.411.pdf
|
EMNLP 2021
|
|||
A Divide-And-Conquer Approach for Multi-label Multi-hop Relation Detection in Knowledge Base Question Answering
|
Deyu Zhou, Yanzheng Xiang, Linhai Zhang, Chenchen Ye, Qian-Wen Zhang, Yunbo Cao
|
Relation detection in knowledge base question answering aims to identify the path(s) of relations starting from the topic entity node and leading to the answer node in the knowledge graph. Such a path might consist of multiple relations, which we call multi-hop. Moreover, for a single question, there may exist multiple relation paths to the correct answer, which we call multi-label. However, most existing approaches detect only a single path to obtain the answer, without considering other correct paths, which might affect the final performance. Therefore, in this paper, we propose a novel divide-and-conquer approach for multi-label multi-hop relation detection (DC-MLMH) that decomposes it into head relation detection and conditional relation path generation. Specifically, a novel path sampling mechanism is proposed to generate diverse relation paths for the inference stage. A majority-vote policy is employed to detect the final KB answer. Comprehensive experiments were conducted on the FreebaseQA benchmark dataset. Experimental results show that the proposed approach not only outperforms other competitive multi-label baselines, but also has superiority over some state-of-the-art KBQA methods.
|
https://aclanthology.org/2021.findings-emnlp.412
|
https://aclanthology.org/2021.findings-emnlp.412.pdf
|
EMNLP 2021
|
|||
Counterfactual Adversarial Learning with Representation Interpolation
|
Wei Wang, Boxin Wang, Ning Shi, Jinfeng Li, Bingyu Zhu, Xiangyu Liu, Rong Zhang
|
Deep learning models exhibit a preference for statistical fitting over logical reasoning. Spurious correlations might be memorized when there exists statistical bias in training data, which severely limits the model performance especially in small data scenarios. In this work, we introduce Counterfactual Adversarial Training framework (CAT) to tackle the problem from a causality perspective. Particularly, for a specific sample, CAT first generates a counterfactual representation through latent space interpolation in an adversarial manner, and then performs Counterfactual Risk Minimization (CRM) on each original-counterfactual pair to adjust sample-wise loss weight dynamically, which encourages the model to explore the true causal effect. Extensive experiments demonstrate that CAT achieves substantial performance improvement over SOTA across different downstream tasks, including sentence classification, natural language inference and question answering.
|
https://aclanthology.org/2021.findings-emnlp.413
|
https://aclanthology.org/2021.findings-emnlp.413.pdf
|
EMNLP 2021
|
|||
‘Just What do You Think You’re Doing, Dave?’ A Checklist for Responsible Data Use in NLP
|
Anna Rogers, Timothy Baldwin, Kobi Leins
|
A key part of the NLP ethics movement is responsible use of data, but exactly what that means or how it can be best achieved remain unclear. This position paper discusses the core legal and ethical principles for collection and sharing of textual data, and the tensions between them. We propose a potential checklist for responsible data (re-)use that could both standardise the peer review of conference submissions, as well as enable a more in-depth view of published research across the community. Our proposal aims to contribute to the development of a consistent standard for data (re-)use, embraced across NLP conferences.
|
https://aclanthology.org/2021.findings-emnlp.414
|
https://aclanthology.org/2021.findings-emnlp.414.pdf
|
EMNLP 2021
|
|||
Counter-Contrastive Learning for Language GANs
|
Yekun Chai, Haidong Zhang, Qiyue Yin, Junge Zhang
|
Generative Adversarial Networks (GANs) have achieved great success in image synthesis, but have proven difficult to apply to natural language generation. Challenges arise from the uninformative learning signals passed from the discriminator. In other words, the poor learning signals limit the learning capacity for generating language with rich structure and semantics. In this paper, we propose to adopt the counter-contrastive learning (CCL) method to support the generator’s training in language GANs. In contrast to standard GANs that adopt a simple binary classifier to discriminate whether a sample is real or fake, we employ a counter-contrastive learning signal that advances the training of language synthesizers by (1) pulling the language representations of generated and real samples together and (2) pushing apart representations of real samples to compete with the discriminator and thus prevent the discriminator from being overtrained. We evaluate our method on both synthetic and real benchmarks and achieve competitive performance compared to previous GANs for adversarial sequence generation.
|
https://aclanthology.org/2021.findings-emnlp.415
|
https://aclanthology.org/2021.findings-emnlp.415.pdf
|
EMNLP 2021
|
|||
Incorporating Circumstances into Narrative Event Prediction
|
Shichao Wang, Xiangrui Cai, HongBin Wang, Xiaojie Yuan
|
Narrative event prediction aims to predict what happens after a sequence of events, which is essential to modeling sophisticated real-world events. Existing studies focus on mining inter-event relationships while ignoring how the events happened, which we call circumstances. We observe that event circumstances indicate what will happen next. To incorporate event circumstances into narrative event prediction, we propose CircEvent, which adopts two multi-head attention modules to retrieve circumstances at the local and global levels. We also introduce a regularization of attention weights to leverage the alignment between events and local circumstances. The experimental results demonstrate that CircEvent outperforms existing baselines by 12.2%. Further analysis demonstrates the effectiveness of our multi-head attention modules and regularization.
|
https://aclanthology.org/2021.findings-emnlp.416
|
https://aclanthology.org/2021.findings-emnlp.416.pdf
|
EMNLP 2021
|
|||
MultiFix: Learning to Repair Multiple Errors by Optimal Alignment Learning
|
HyeonTae Seo, Yo-Sub Han, Sang-Ki Ko
|
We consider the problem of learning to repair erroneous C programs by learning optimal alignments with correct programs. Since previous approaches fix a single error in a line at a time, the fixing process must inevitably be iterated until no errors remain. In this work, we propose a novel sequence-to-sequence learning framework for fixing multiple program errors at a time. We introduce an edit-distance-based data labeling approach for program error correction. Instead of labeling a program repair example by pairing an erroneous program with a line fix, we label the example by pairing the erroneous program with an optimal alignment to the corresponding correct program, produced by the edit-distance computation. We evaluate our proposed approach on a publicly available dataset (the DeepFix dataset) of erroneous C programs submitted by novice programming students. On a set of 6,975 erroneous C programs from the DeepFix dataset, our approach achieves the state-of-the-art full repair rate, without extra data such as compiler error messages or additional source code for pre-training.
|
https://aclanthology.org/2021.findings-emnlp.417
|
https://aclanthology.org/2021.findings-emnlp.417.pdf
|
EMNLP 2021
|
|||
HOTTER: Hierarchical Optimal Topic Transport with Explanatory Context Representations
|
Sabine Wehnert, Christian Scheel, Simona Szakács-Behling, Maret Nieländer, Patrick Mielke, Ernesto William De Luca
|
Natural language processing (NLP) is often the backbone of today’s systems for user interaction, information retrieval and others. Many such NLP applications rely on specialized learned representations (e.g. neural word embeddings, topic models) that improve the ability to reason about the relationships between documents of a corpus. Paired with the progress in learned representations, the similarity metrics used to compare representations of documents are also evolving, with numerous proposals differing in computation time or interpretability. In this paper, we propose an extension to a specific emerging hybrid document distance metric which combines topic models and word embeddings: the Hierarchical Optimal Topic Transport (HOTT). Specifically, we extend HOTT by using context-enhanced word representations. We provide a validation of our approach on public datasets, using the language model BERT for a document categorization task. Results indicate competitive performance of the extended HOTT metric. We furthermore apply the HOTT metric and its extension to support educational media research, with a retrieval task of matching topics in German curricula to educational textbook passages, along with offering an auxiliary explanatory document representing the dominant topic of the retrieved document. In a user study, our explanation method is preferred over regular topic keywords.
|
https://aclanthology.org/2021.findings-emnlp.418
|
https://aclanthology.org/2021.findings-emnlp.418.pdf
|
EMNLP 2021
|
|||
Grammatical Error Correction with Contrastive Learning in Low Error Density Domains
|
Hannan Cao, Wenmian Yang, Hwee Tou Ng
|
Although grammatical error correction (GEC) has achieved good performance on texts written by learners of English as a second language, performance on low error density domains where texts are written by English speakers of varying levels of proficiency can still be improved. In this paper, we propose a contrastive learning approach to encourage the GEC model to assign a higher probability to a correct sentence while reducing the probability of incorrect sentences that the model tends to generate, so as to improve the accuracy of the model. Experimental results show that our approach significantly improves the performance of GEC models in low error density domains, when evaluated on the benchmark CWEB dataset.
|
https://aclanthology.org/2021.findings-emnlp.419
|
https://aclanthology.org/2021.findings-emnlp.419.pdf
|
EMNLP 2021
|
|||
Improving Unsupervised Commonsense Reasoning Using Knowledge-Enabled Natural Language Inference
|
Canming Huang, Weinan He, Yongmei Liu
|
Recent methods based on pre-trained language models have shown strong supervised performance on commonsense reasoning. However, they rely on expensive data annotation and time-consuming training. Thus, we focus on unsupervised commonsense reasoning. We show the effectiveness of using a common framework, Natural Language Inference (NLI), to solve diverse commonsense reasoning tasks. By leveraging transfer learning from large NLI datasets, and injecting crucial knowledge from commonsense sources such as ATOMIC 2020 and ConceptNet, our method achieved state-of-the-art unsupervised performance on two commonsense reasoning tasks: WinoWhy and CommonsenseQA. Further analysis demonstrated the benefits of multiple categories of knowledge, but problems about quantities and antonyms are still challenging.
|
https://aclanthology.org/2021.findings-emnlp.420
|
https://aclanthology.org/2021.findings-emnlp.420.pdf
|
EMNLP 2021
|
|||
Does Putting a Linguist in the Loop Improve NLU Data Collection?
|
Alicia Parrish, William Huang, Omar Agha, Soo-Hwan Lee, Nikita Nangia, Alexia Warstadt, Karmanya Aggarwal, Emily Allaway, Tal Linzen, Samuel R. Bowman
|
Many crowdsourced NLP datasets contain systematic artifacts that are identified only after data collection is complete. Earlier identification of these issues should make it easier to create high-quality training and evaluation data. We attempt this by evaluating protocols in which expert linguists work ‘in the loop’ during data collection to identify and address these issues by adjusting task instructions and incentives. Using natural language inference as a test case, we compare three data collection protocols: (i) a baseline protocol with no linguist involvement, (ii) a linguist-in-the-loop intervention with iteratively-updated constraints on the writing task, and (iii) an extension that adds direct interaction between linguists and crowdworkers via a chatroom. We find that linguist involvement does not lead to increased accuracy on out-of-domain test sets compared to baseline, and adding a chatroom has no effect on the data. Linguist involvement does, however, lead to more challenging evaluation data and higher accuracy on some challenge sets, demonstrating the benefits of integrating expert analysis during data collection.
|
https://aclanthology.org/2021.findings-emnlp.421
|
https://aclanthology.org/2021.findings-emnlp.421.pdf
|
EMNLP 2021
|
|||
Tiered Reasoning for Intuitive Physics: Toward Verifiable Commonsense Language Understanding
|
Shane Storks, Qiaozi Gao, Yichi Zhang, Joyce Chai
|
Large-scale, pre-trained language models (LMs) have achieved human-level performance on a breadth of language understanding tasks. However, evaluations only based on end task performance shed little light on machines’ true ability in language understanding and reasoning. In this paper, we highlight the importance of evaluating the underlying reasoning process in addition to end performance. Toward this goal, we introduce Tiered Reasoning for Intuitive Physics (TRIP), a novel commonsense reasoning dataset with dense annotations that enable multi-tiered evaluation of machines’ reasoning process. Our empirical results show that while large LMs can achieve high end performance, they struggle to support their predictions with valid supporting evidence. The TRIP dataset and our baseline results will motivate verifiable evaluation of commonsense reasoning and facilitate future research toward developing better language understanding and reasoning models.
|
https://aclanthology.org/2021.findings-emnlp.422
|
https://aclanthology.org/2021.findings-emnlp.422.pdf
|
EMNLP 2021
|
|||
Making Heads and Tails of Models with Marginal Calibration for Sparse Tagsets
|
Michael Kranzlein, Nelson F. Liu, Nathan Schneider
|
For interpreting the behavior of a probabilistic model, it is useful to measure a model’s calibration—the extent to which it produces reliable confidence scores. We address the open problem of calibration for tagging models with sparse tagsets, and recommend strategies to measure and reduce calibration error (CE) in such models. We show that several post-hoc recalibration techniques all reduce calibration error across the marginal distribution for two existing sequence taggers. Moreover, we propose tag frequency grouping (TFG) as a way to measure calibration error in different frequency bands. Further, recalibrating each group separately promotes a more equitable reduction of calibration error across the tag frequency spectrum.
|
https://aclanthology.org/2021.findings-emnlp.423
|
https://aclanthology.org/2021.findings-emnlp.423.pdf
|
EMNLP 2021
|
|||
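The calibration error the abstract above sets out to measure and reduce is commonly estimated by binning predictions by confidence, as in the standard expected calibration error (ECE). The sketch below is a generic ECE estimator for illustration; the paper's exact CE measure over the marginal distribution of a sparse tagset, and its tag frequency grouping, may differ.

```python
# Illustrative expected-calibration-error estimator (a standard binned ECE
# sketch; an assumption about the general technique, not the paper's code).
def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence; return the bin-size-weighted average
    of |accuracy - mean confidence| over the non-empty bins."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0 into the top bin
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(1 for _, ok in b if ok) / len(b)
        ece += (len(b) / n) * abs(acc - avg_conf)
    return ece
```

A well-calibrated tagger has accuracy close to its stated confidence in every bin, so its ECE is near zero; post-hoc recalibration methods of the kind the paper evaluates aim to shrink this quantity.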
GeDi: Generative Discriminator Guided Sequence Generation
|
Ben Krause, Akhilesh Deepak Gotmare, Bryan McCann, Nitish Shirish Keskar, Shafiq Joty, Richard Socher, Nazneen Fatema Rajani
|
https://aclanthology.org/2021.findings-emnlp.424
|
https://aclanthology.org/2021.findings-emnlp.424.pdf
|
EMNLP 2021
|