1st SustaiNLP@EMNLP 2020: Virtual
- Nafise Sadat Moosavi, Angela Fan, Vered Shwartz, Goran Glavas, Shafiq R. Joty, Alex Wang, Thomas Wolf: Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing, SustaiNLP@EMNLP 2020, Online, November 20, 2020. Association for Computational Linguistics 2020, ISBN 978-1-952148-77-4
- Ali Akbar Septiandri, Yosef Ardhito Winatmoko, Ilham Firdausi Putra: Knowing Right from Wrong: Should We Use More Complex Models for Automatic Short-Answer Scoring in Bahasa Indonesia? 1-7
- Urmish Thakker, Jesse G. Beu, Dibakar Gope, Ganesh Dasika, Matthew Mattina: Rank and run-time aware compression of NLP Applications. 8-18
- Harshil Shah, Julien Fauqueur: Learning Informative Representations of Biomedical Relations with Latent Variable Models. 19-28
- Kumar Shridhar, Harshil Jain, Akshat Agarwal, Denis Kleyko: End to End Binarized Neural Networks for Text Classification. 29-34
- Moshe Wasserblat, Oren Pereg, Peter Izsak: Exploring the Boundaries of Low-Resource BERT Distillation. 35-40
- Sosuke Kobayashi, Sho Yokoi, Jun Suzuki, Kentaro Inui: Efficient Estimation of Influence of a Training Instance. 41-47
- Yi-Te Hsu, Sarthak Garg, Yi-Hsiu Liao, Ilya Chatsviorkin: Efficient Inference For Neural Machine Translation. 48-53
- Alicia Y. Tsai, Laurent El Ghaoui: Sparse Optimization for Unsupervised Extractive Summarization of Long Documents with the Frank-Wolfe Algorithm. 54-62
- Yuxiang Wu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel: Don't Read Too Much Into It: Adaptive Computation for Open-Domain Question Answering. 63-72
- Cennet Oguz, Ngoc Thang Vu: A Two-stage Model for Slot Filling in Low-resource Settings: Domain-agnostic Non-slot Reduction and Pretrained Contextual Embeddings. 73-82
- Ji Xin, Rodrigo Frassetto Nogueira, Yaoliang Yu, Jimmy Lin: Early Exiting BERT for Efficient Document Ranking. 83-88
- Giuseppe Lancioni, Saida S. Mohamed, Beatrice Portelli, Giuseppe Serra, Carlo Tasso: Keyphrase Generation with GANs in Low-Resources Scenarios. 89-96
- Norbert Kis-Szabó, Gábor Berend: Quasi-Multitask Learning: an Efficient Surrogate for Obtaining Model Ensembles. 97-106
- Xinyu Zhang, Andrew Yates, Jimmy Lin: A Little Bit Is Worse Than None: Ranking with Limited Training Data. 107-112
- Parul Awasthy, Bishwaranjan Bhattacharjee, John R. Kender, Radu Florian: Predictive Model Selection for Transfer Learning in Sequence Labeling Tasks. 113-118
- Amine Abdaoui, Camille Pradel, Grégoire Sigel: Load What You Need: Smaller Versions of Multilingual BERT. 119-123
- Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, Kurt Keutzer: SqueezeBERT: What can computer vision teach NLP about efficient neural networks? 124-135
- Raj Ratn Pranesh, Ambesh Shekhar: Analysis of Resource-efficient Predictive Models for Natural Language Processing. 136-140
- Qingqing Cao, Aruna Balasubramanian, Niranjan Balasubramanian: Towards Accurate and Reliable Energy Measurement of NLP Models. 141-148
- Young Jin Kim, Hany Hassan: FastFormers: Highly Efficient Transformer Models for Natural Language Understanding. 149-158
- Ariadna Quattoni, Xavier Carreras: A comparison between CNNs and WFAs for Sequence Classification. 159-163
- Seungtaek Choi, Myeongho Jeong, Jinyoung Yeo, Seung-won Hwang: Label-Efficient Training for Next Response Selection. 164-168
- Swaroop Mishra, Bhavdeep Singh Sachdeva: Do We Need to Create Big Datasets to Learn a Task? 169-173
- Alex Wang, Thomas Wolf: Overview of the SustaiNLP 2020 Shared Task. 174-178