8th RepL4NLP@ACL 2023: Toronto, Canada
- Burcu Can, Maximilian Mozes, Samuel Cahyawijaya, Naomi Saphra, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Chen Zhao, Isabelle Augenstein, Anna Rogers, Kyunghyun Cho, Edward Grefenstette, Lena Voita:
Proceedings of the 8th Workshop on Representation Learning for NLP, RepL4NLP@ACL 2023, Toronto, Canada, July 13, 2023. Association for Computational Linguistics 2023, ISBN 978-1-959429-77-7
- Frontmatter.
- Ashim Gupta, Amrith Krishna:
Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems. 1-12
- Shahriar Golchin, Mihai Surdeanu, Nazgol Tavabi, Ata M. Kiapour:
Do not Mask Randomly: Effective Domain-adaptive Pre-training by Masking In-domain Keywords. 13-21
- Vivi Nastase, Paola Merlo:
Grammatical information in BERT sentence embeddings as two-dimensional arrays. 22-39
- Akshay Srinivasan, Sowmya Vajjala:
A Multilingual Evaluation of NER Robustness to Adversarial Inputs. 40-53
- Benfeng Xu, Chunxu Zhao, Wenbin Jiang, Pengfei Zhu, Songtai Dai, Chao Pang, Zhuo Sun, Shuohuan Wang, Yu Sun:
Retrieval-Augmented Domain Adaptation of Language Models. 54-64
- Yiwei Lyu, Tiange Luo, Jiacheng Shi, Todd C. Hollon, Honglak Lee:
Fine-grained Text Style Transfer with Diffusion-Based Language Models. 65-74
- Seungyeon Lee, Minho Lee:
Enhancing text comprehension for Question Answering with Contrastive Learning. 75-86
- Keisuke Shirai, Hirotaka Kameko, Shinsuke Mori:
Towards Flow Graph Prediction of Open-Domain Procedural Texts. 87-96
- Gregor Geigle, Chen Liu, Jonas Pfeiffer, Iryna Gurevych:
One does not fit all! On the Complementarity of Vision Encoders for Vision and Language Tasks. 97-117
- Wenbo Zhao, Arpit Gupta, Tagyoung Chung, Jing Huang:
SPC: Soft Prompt Construction for Cross Domain Generalization. 118-130
- Adrian Kochsiek, Apoorv Saxena, Inderjeet Nair, Rainer Gemulla:
Friendly Neighbors: Contextualized Sequence-to-Sequence Link Prediction. 131-138
- Sneha Singhania, Simon Razniewski, Gerhard Weikum:
Extracting Multi-valued Relations from Language Models. 139-154
- Anni Chen, Bhuwan Dhingra:
Hierarchical Multi-Instance Multi-Label Learning for Detecting Propaganda Techniques. 155-163
- Narutatsu Ri, Fei-Tzin Lee, Nakul Verma:
Contrastive Loss is All You Need to Recover Analogies as Parallel Lines. 164-173
- Alireza Mohammadshahi, James Henderson:
Syntax-Aware Graph-to-Graph Transformer for Semantic Role Labelling. 174-186
- Mahdi Rahimi, Mihai Surdeanu:
Improving Zero-shot Relation Classification via Automatically-acquired Entailment Templates. 187-195
- Vishvak Murahari, Ameet Deshpande, Carlos E. Jimenez, Izhak Shafran, Mingqiu Wang, Yuan Cao, Karthik Narasimhan:
MUX-PLMs: Pre-training Language Models with Data Multiplexing. 196-211
- Robert Gale, Alexandra Salem, Gerasimos Fergadiotis, Steven Bedrick:
Mixed Orthographic/Phonemic Language Modeling: Beyond Orthographically Restricted Transformers (BORT). 212-225
- Stephen Obadinma, Hongyu Guo, Xiaodan Zhu:
Effectiveness of Data Augmentation for Parameter Efficient Tuning with Limited Data. 226-237
- Bin Wang, Haizhou Li:
Relational Sentence Embedding for Flexible Semantic Matching. 238-252
- Likang Xiao, Richong Zhang, Zijie Chen, Junfan Chen:
Tucker Decomposition with Frequency Attention for Temporal Knowledge Graph Completion. 253-265
- Romain Bielawski, Rufin VanRullen:
CLIP-based image captioning via unsupervised cycle-consistency in the latent space. 266-275
- Guangsheng Bao, Zhiyang Teng, Yue Zhang:
Token-level Fitting Issues of Seq2seq Models. 276-288
- David Cheng-Han Chiang, Hung-yi Lee, Yung-Sung Chuang, James R. Glass:
Revealing the Blind Spot of Sentence Encoder Evaluation by HEROS. 289-302
- John B. Harvill, Mark Hasegawa-Johnson, Hee Suk Yoon, Chang D. Yoo, Eunseop Yoon:
One-Shot Exemplification Modeling via Latent Sense Representations. 303-314
- Lingfeng Shen, Haiyun Jiang, Lemao Liu, Shuming Shi:
Sen2Pro: A Probabilistic Perspective to Sentence Embedding from Pre-trained Language Model. 315-333
- Xudong Hong, Vera Demberg, Asad B. Sayeed, Qiankun Zheng, Bernt Schiele:
Visual Coherence Loss for Coherent and Visually Grounded Story Generation. 334-346