23rd CCL 2024, Taiyuan, China
- Maosong Sun, Jiye Liang, Xianpei Han, Zhiyuan Liu, Yulan He, Gaoqi Rao, Yubo Chen, Zhiliang Tian:
Chinese Computational Linguistics - 23rd China National Conference, CCL 2024, Taiyuan, China, July 25-28, 2024, Proceedings. Lecture Notes in Computer Science 14761, Springer 2025, ISBN 978-981-97-8366-3
Information Retrieval, Text Classification and QA
- Zhiyu Yang, Shuo Wang, Yukun Yan, Pengyuan Liu, Dong Yu:
Enhancing Free-Form Table Question Answering Models by Distilling Relevant-Cell-Based Rationales. 3-18
- Shijun Wang, Han Zhang, Zhe Yuan:
Enhancing Sequence Representation for Personalized Search. 19-36
- Yaqi Sun, Jing Yun, Zhuoqun Ma:
Joint Similarity Guidance Hash Coding Based on Adaptive Weight Mixing Strategy For Cross-Modal Retrieval. 37-53
Text Generation, Dialogue and Summarization
- Huidong Du, Hao Sun, Pengyuan Liu, Dong Yu:
Generate-then-Revise: An Effective Synthetic Training Data Generation Framework for Event Detection. 57-72
Machine Translation and Multilingual Information Processing
- Linqing Chen, Weilei Wang, Dongyang Hu:
E3: Optimizing Language Model Training for Translation via Enhancing Efficiency and Effectiveness. 75-90
- Zhenguo Zhang, Jianjian Liu, Ying Li:
Multi-features Enhanced Multi-task Learning for Vietnamese Treebank Conversion. 91-105
- Menglong Xu, Yanliang Zhang:
SimCLNMT: A Simple Contrastive Learning Method for Enhancing Neural Machine Translation Quality. 106-119
- Pengcheng Huang, Yongyu Mu, Yuzhang Wu, Bei Li, Chunyang Xiao, Tong Xiao, Jingbo Zhu:
Translate-and-Revise: Boosting Large Language Models for Constrained Translation. 120-139
Knowledge Graph and Information Extraction
- Hui Zhao, Di Zhao, Jiana Meng, Shuang Liu, Hongfei Lin:
A Multi-task Biomedical Named Entity Recognition Method Based on Data Augmentation. 143-157
- Lishuang Li, Liteng Mi, Beibei Zhang, Yi Xiang, Yubo Feng, Xueyang Qin, Jingyao Tang:
Biomedical Event Causal Relation Extraction by Reasoning Optimal Entity Relation Path. 158-173
- Yili Qian, Enlong Ren, Haonan Xu:
Joint Entity and Relation Extraction Based on Bidirectional Update and Long-Term Memory Gate Mechanism. 174-190
- Jiatong Li, Kui Meng:
MFE-NER: Multi-feature Fusion Embedding for Chinese Named Entity Recognition. 191-204
- Baofeng Li, Jianguo Tang, Yu Qin, Yuelou Xu, Yan Lu, Kai Wang, Lei Li, Yanquan Zhou:
UDAA: An Unsupervised Domain Adaptation Adversarial Learning Framework for Zero-Resource Cross-Domain Named Entity Recognition. 205-221
Social Computing and Sentiment Analysis
- Jiayi Huang, Lishuang Li, Xueyang Qin, Yi Xiang, Jiaqi Li, Yubo Feng:
Triple-view Event Hierarchy Model for Biomedical Event Representation. 225-239
NLP Applications
- Jie Zhou, Shengxiang Gao, Zhengtao Yu, Ling Dong, Wenjun Wang:
DialectMoE: An End-to-End Multi-dialect Speech Recognition Model with Mixture-of-Experts. 243-258
- Chu Yuan Zhang, Jiangyan Yi, Jianhua Tao, Chenglong Wang, Xinrui Yan:
Distinguishing Neural Speech Synthesis Models Through Fingerprints in Speech Waveforms. 259-273
- Qiuyu Liang, Weihua Wang, Lei Lv, Feilong Bao:
Knowledge Graph-Enhanced Recommendation with Box Embeddings. 274-288
- Jingshen Zhang, Xinglu Chen, Xinying Qiu, Zhimin Wang, Wenhe Feng:
Readability-Guided Idiom-Aware Sentence Simplification (RISS) for Chinese. 289-310
Fundamental Theory and Method of Language Computing and Cognition
- Ya Li:
A Tone-Based Hierarchical Structure of Chinese Prosody. 313-326
- Binghao Tang, Boda Lin, Si Li:
Linguistic Guidance for Sequence-to-Sequence AMR Parsing. 327-340
Language Resource and Evaluation
- Lin Zhu, Meng Xu, Wenya Guo, Jingsi Yu, Liner Yang, Zehuang Cao, Yuan Huang, Erhong Yang:
Automatic Construction of the English Sentence Pattern Structure Treebank for Chinese ESL Learners. 343-361
- Yujie Wang, Chao Huang, Liner Yang, Zhixuan Fang, Yaping Huang, Yang Liu, Jingsi Yu, Erhong Yang:
Cost-Efficient Crowdsourcing for Span-Based Sequence Labeling: Worker Selection and Data Augmentation. 362-386
- Ruoxi Xu, Hongyu Lin, Xinyan Guan, Yingfei Sun, Le Sun:
DLUE: Benchmarking Document Language Understanding. 387-401
- Shisen Yue, Siyuan Song, Xinyuan Cheng, Hai Hu:
Do Large Language Models Understand Conversational Implicature - A Case Study with a Chinese Sitcom. 402-418
- Yan Zhao, Jiangyan Yi, Jianhua Tao, Chenglong Wang, Yongfeng Dong:
EmoFake: An Initial Dataset for Emotion Fake Audio Detection. 419-433
- Wenbiao Li, Rui Sun, Tianyi Zhang, Yunfang Wu:
Going Beyond Passages: Readability Assessment for Book-Level Long Texts. 434-450
- Hongli Zhou, Hui Huang, Yunfei Long, Bing Xu, Conghui Zhu, Hailong Cao, Muyun Yang, Tiejun Zhao:
Mitigating the Bias of Large Language Model Evaluation. 451-462
- Qi Huang, Han Fu, Wenbin Luo, Mingwen Wang, Kaiwei Luo:
PPDAC: A Plug-and-Play Data Augmentation Component for Few-Shot Extractive Question Answering. 463-481
- Jieyu Lin, Honghua Chen, Nai Ding:
Sentence-Space Metrics (SSM) for the Evaluation of Sentence Comprehension. 482-502
Large Language Models
- Jiajia Huang, Haoran Zhu, Chao Xu, Tianming Zhan, Qianqian Xie, Jimin Huang:
AuditWen: An Open-Source Large Language Model for Audit. 505-521
- Xiao Liu, Ying Li, Zhengtao Yu:
Chinese Grammatical Error Correction via Large Language Model Guided Optimization Training. 522-539
- Chunkang Zhang, Boxi Cao, Yaojie Lu, Hongyu Lin, Liu Cao, Ke Zeng, Guanglu Wan, Xunliang Cai, Xianpei Han, Le Sun:
Pattern Shifting or Knowledge Losing? A Forgetting Perspective for Understanding the Effect of Instruction Fine-Tuning. 540-554
- Hang Zhou, Chenglong Wang, Yimin Hu, Tong Xiao, Chunliang Zhang, Jingbo Zhu:
Prior Constraints-Based Reward Model Training for Aligning Large Language Models. 555-570
- Wenjuan Han, Xiang Wei, Xingyu Cui, Ning Cheng, Guangyuan Jiang, Weinan Qian, Chi Zhang:
Prompt Engineering 101: Prompt Engineering Guidelines from a Linguistic Perspective. 571-592