I am currently a Researcher at TikTok AI Innovation Center in Singapore 🇸🇬, where we are doing cutting-edge AI research in large language models, multimodal systems, and code intelligence 💻. As the head of the code research team, I drive innovations in automated programming, code understanding, and developer productivity tools. Our research focuses on advancing the state-of-the-art in code generation, program synthesis, and intelligent code analysis systems. Prior to TikTok, I led the development of multilingual language models (Sailor and Sailor2) as a Research Scientist at Sea AI Lab 🌊.

🚀 We're growing our code research team and actively recruiting Researchers and Research Engineers! If you're passionate about advancing the frontier of code intelligence and large language models, we'd love to hear from you. We offer competitive compensation and an excellent research environment (based in Singapore / Beijing / Shanghai). Feel free to reach out via email: qian dot liu at bytedance dot com ✉️

My primary research interests lie in natural language processing, with a focus on code intelligence and generation, structured data understanding, and natural language reasoning. I completed my Ph.D. through a joint program between Beihang University and Microsoft Research Asia, where I was fortunate to be advised by Jian-Guang Lou and Bei Chen. My doctoral research focused on semantic parsing, a challenging area that bridges natural language understanding and program synthesis by translating natural language instructions into executable formal programs. In my thesis, I introduced novel approaches for building semantic parsing systems that are efficient, generalizable, and interactive.
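To make the idea of semantic parsing concrete, here is a minimal, illustrative sketch: a toy parser that maps a natural-language question to an executable SQL query over a small table. Real semantic parsers are learned models, not rule tables; every name below (`parse_to_sql`, the `cities` schema) is hypothetical and chosen only for illustration.

```python
# Toy semantic parsing: natural language -> executable SQL.
# Hypothetical example; a real system would use a learned model.
import sqlite3

def parse_to_sql(question: str) -> str:
    """A rule-based stand-in for a learned semantic parser."""
    q = question.lower()
    if "how many" in q:
        return "SELECT COUNT(*) FROM cities"
    if "largest population" in q:
        return "SELECT name FROM cities ORDER BY population DESC LIMIT 1"
    raise ValueError("unsupported question")

# A small table to execute the parsed program against.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cities (name TEXT, population INTEGER)")
conn.executemany("INSERT INTO cities VALUES (?, ?)",
                 [("Singapore", 5_917_000), ("Beijing", 21_540_000)])

sql = parse_to_sql("Which city has the largest population?")
result = conn.execute(sql).fetchone()[0]
print(result)  # Beijing
```

The key property, and what makes the problem hard, is that the output is a formal program that can be executed to produce an answer, rather than free-form text.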

Prior to my research career, I served as a lead teaching assistant for several foundational computer science courses, including Computer Organization, Operating Systems, and Software Engineering. During this time, I authored a comprehensive Operating Systems Laboratory Manual that has helped guide students through hands-on system programming exercises. In 2017, recognizing the need for structured mentorship, I founded S.T.A.R. (Student Teaching Assistant Resources), the first undergraduate teaching assistant organization at our university. Through S.T.A.R., I established a mentoring framework to help new teaching assistants develop their pedagogical skills and create a supportive community of student educators.

For more details, please see my CV or reach out via email.

✨ News

[2024.11] Spider 2.0 is out! Check out our new paper on evaluating language models on real-world enterprise text-to-SQL workflows!

[2024.11] Check out our OpenCoder, the open cookbook for top-tier code large language models!

[2024.09] Check out Programming Every Example (ProX), our work on scaling data quality improvements!

[2024.07] Our RegMix paper and Scaling Laws with Vocabulary paper are available on arXiv!

[2024.06] We released BigCodeBench, a new benchmark for evaluating code generation!

📝 Selected Publications (Full Publications on Google Scholar)

Spider 2.0: Evaluating Language Models on Real-World Enterprise Text-to-SQL Workflows
Fangyu Lei*, Jixuan Chen*, Yuxiao Ye, Ruisheng Cao, Dongchan Shin, Hongjin Su, Zhaoqing Suo, Hongcheng Gao, Wenjing Hu, Pengcheng Yin, Victor Zhong, Caiming Xiong, Ruoxi Sun, Qian Liu, Sida Wang, Tao Yu (* = Equal Contribution)
PDF | Github

OpenCoder: The Open Cookbook for Top-Tier Code Large Language Models
Siming Huang*, Tianhao Cheng*, J.K. Liu, Jiaran Hao, Liuyihan Song, Yang Xu, J. Yang, J.H. Liu, Chenchen Zhang, Linzheng Chai, Ruifeng Yuan, Zhaoxiang Zhang, Jie Fu, Qian Liu, Ge Zhang, Zili Wang, Yuan Qi, Yinghui Xu, Wei Chu (* = Equal Contribution)
PDF | Github | Dataset

Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale
Fan Zhou*, Zengzhi Wang*, Qian Liu, Junlong Li, Pengfei Liu (* = Equal Contribution)
PDF | Dataset | Data Quality Models

Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies
Chaofan Tao, Qian Liu, Longxu Dou, Niklas Muennighoff, Zhongwei Wan, Ping Luo, Min Lin, Ngai Wong
NeurIPS 2024
PDF | Github | Vocab Size Calculator

RegMix: Data Mixture as Regression for Language Model Pre-training
Qian Liu*, Xiaosen Zheng*, Niklas Muennighoff, Guangtao Zeng, Longxu Dou, Tianyu Pang, Jing Jiang, Min Lin (* = Equal Contribution)
PDF | Github | Dataset | Demo

BigCodeBench: Benchmarking Code Generation with Diverse Function Calls and Complex Instructions
Terry Yue Zhuo, Minh Chien Vu, Jenny Chim, Han Hu, Wenhao Yu, Ratnadira Widyasari, Imam Nur Bani Yusuf, Haolan Zhan, Junda He, Indraneil Paul, Simon Brunner, Chen Gong, Thong Hoang, Armel Randy Zebaze, Xiaoheng Hong, Wen-Ding Li, Jean Kaddour, Ming Xu, Zhihan Zhang, Prateek Yadav, Naman Jain, Alex Gu, Zhoujun Cheng, Jiawei Liu, Qian Liu, Zijian Wang, David Lo, Binyuan Hui, Niklas Muennighoff, Daniel Fried, Xiaoning Du, Harm de Vries, Leandro Von Werra
PDF | LeaderBoard

Sailor: Open Language Models for South-East Asia
Longxu Dou*, Qian Liu*, Guangtao Zeng, Jia Guo, Jiahui Zhou, Xin Mao, Ziqi Jin, Wei Lu, Min Lin (* = Equal Contribution)
EMNLP 2024 (Demo)
PDF | Blog | Github | Chat with Sailor

StarCoder 2 and The Stack v2: The Next Generation
Anton Lozhkov, Raymond Li, Loubna Ben Allal, Federico Cassano, Joel Lamy-Poirier, Nouamane Tazi, Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei, Tianyang Liu, Max Tian, Denis Kocetkov, Arthur Zucker, Younes Belkada, Zijian Wang, Qian Liu, Dmitry Abulkhanov, Indraneil Paul, Zhuang Li, Wen-Ding Li, Megan Risdal, Jia Li, Jian Zhu, Terry Yue Zhuo, Evgenii Zheltonozhskii, Nii Osae Osae Dade, Wenhao Yu, Lucas Krauß, Naman Jain, Yixuan Su, Xuanli He, Manan Dey, Edoardo Abati, Yekun Chai, Niklas Muennighoff, Xiangru Tang, Muhtasham Oblokulov, Christopher Akiki, Marc Marone, Chenghao Mou, Mayank Mishra, Alex Gu, Binyuan Hui, Tri Dao, Armel Zebaze, Olivier Dehaene, Nicolas Patry, Canwen Xu, Julian McAuley, Han Hu, Torsten Scholak, Sebastien Paquet, Jennifer Robinson, Carolyn Jane Anderson, Nicolas Chapados, Mostofa Patwary, Nima Tajbakhsh, Yacine Jernite, Carlos Muñoz Ferrandis, Lingming Zhang, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, Harm de Vries
PDF | Blog | Model

Self-Distillation Bridges Distribution Gap in Language Model Fine-Tuning
Zhaorui Yang, Tianyu Pang, Haozhe Feng, Han Wang, Wei Chen, Minfeng Zhu, Qian Liu
ACL 2024
PDF | Github

Beyond Memorization: The Challenge of Random Memory Access in Language Models
Tongyao Zhu, Qian Liu, Liang Pang, Zhengbao Jiang, Min-Yen Kan, Min Lin
ACL 2024
PDF | Github

OctoPack: Instruction Tuning Code Large Language Models
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
ICLR 2024 (Spotlight)
PDF | Github | Dataset

S3Eval: A Synthetic, Scalable, Systematic Evaluation Suite for Large Language Models
Fangyu Lei*, Qian Liu*, Yiming Huang*, Shizhu He, Jun Zhao, Kang Liu (* = Equal Contribution)
NAACL 2024
PDF | Github

OpenAgents: An Open Platform for Language Agents in the Wild
Tianbao Xie*, Fan Zhou*, Zhoujun Cheng*, Peng Shi*, Luoxuan Weng*, Yitao Liu*, Toh Jing Hua, Junning Zhao, Qian Liu, Che Liu, Leo Z. Liu, Yiheng Xu, Hongjin Su, Dongchan Shin, Caiming Xiong, Tao Yu (* = Equal Contribution)
COLM 2024
PDF | Github | Homepage | Video

Lemur: Harmonizing Natural Language and Code for Language Agents
Yiheng Xu*, Hongjin Su*, Chen Xing*, Boyu Mi, Qian Liu, Weijia Shi, Binyuan Hui, Fan Zhou, Yitao Liu, Tianbao Xie, Zhoujun Cheng, Siheng Zhao, Lingpeng Kong, Bailin Wang, Caiming Xiong, Tao Yu (* = Equal Contribution)
ICLR 2024 (Spotlight)
PDF | Github | Homepage | Model | Media

LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition
Chengsong Huang*, Qian Liu*, Bill Yuchen Lin*, Tianyu Pang, Chao Du, Min Lin (* = Equal Contribution)
COLM 2024
PDF | Github | Homepage | Media | Media (Chinese)

Active Retrieval Augmented Generation
Zhengbao Jiang*, Frank F. Xu*, Luyu Gao*, Zhiqing Sun*, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, Graham Neubig (* = Equal Contribution)
EMNLP 2023
PDF | Github | LangChain Integration

Generative Table Pre-training Empowers Models for Tabular Prediction
Tianping Zhang, Shaowen Wang, Shuicheng Yan, Jian Li, Qian Liu
EMNLP 2023
PDF | Github | Model

StarCoder: may the source be with you!
Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, Harm de Vries
TMLR 2023
PDF | Github | Model | Blog

Learning on Large-scale Text-attributed Graphs via Variational Inference
Jianan Zhao, Meng Qu, Chaozhuo Li, Hao Yan, Qian Liu, Rui Li, Xing Xie, Jian Tang
ICLR 2023 (Oral)
PDF | Github

SantaCoder: don't reach for the stars!
Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra
Best Paper Award of DL4C @ ICLR 2023
PDF | Model

Reasoning Like Program Executors
Xinyu Pi*, Qian Liu*, Bei Chen, Morteza Ziyadi, Zeqi Lin, Yan Gao, Qiang Fu, Jian-Guang Lou, Weizhu Chen (* = Equal Contribution)
Distinguished Contribution Award (2/300+) on Microsoft 2022 MLADS Spring | EMNLP 2022 (Oral)
PDF | Video

TAPEX: Table Pre-training via Learning a Neural SQL Executor
Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou
ICLR 2022
PDF | Slides | Github | Cite | Homepage | Video(Chinese) | Blog | Model

ReTraCk: A Flexible and Efficient Framework for Knowledge Base Question Answering
Shuang Chen*, Qian Liu*, Zhiwei Yu*, Chin-Yew Lin, Jian-Guang Lou, Feng Jiang (* = Equal Contribution)
ACL 2021 (Demo)
PDF | Github | Cite | Video

Compositional Generalization by Learning Analytical Expressions
Qian Liu*, Shengnan An*, Jian-Guang Lou, Bei Chen, Zeqi Lin, Yan Gao, Bin Zhou, Nanning Zheng, Dongmei Zhang (* = Equal Contribution)
NeurIPS 2020 (Spotlight)
PDF | Slides | Github | Cite | Video | Video(Chinese) | Blog(Chinese)

💬 Talks

[2022.10-12] Language Pre-training without Natural Language (Invited Talk)
📖 Language models pre-trained on large-scale textual data have been successful but lack reasoning ability, partly because reasoning data is scarce in text. This talk proposes using programs, rather than natural language, as the pre-training corpus to improve reasoning in tasks such as tabular, numerical, and spatial reasoning.
Slides | Video

Venue: Carnegie Mellon University (CMU) Host: Frank Xu
Venue: Sigma Computing Host: Madelon Hulsebos
Venue: National University of Singapore (NUS) Host: Prof. Min-Yen Kan
Venue: Singapore University of Technology & Design (SUTD) Host: Prof. Wei Lu
Venue: Nanyang Technological University (NTU) Host: Prof. Luu Anh Tuan

[2022.09] Introduction to Language Models (Tutorial)
📖 The tutorial will give a brief overview of mainstream language model architectures (ELMo, GPT, BERT), giant language models (GPT3, Chinchilla), retrieval-based language models (REALM, kNN-LM), and interesting trends (scaling law, instruction following, parameter efficiency).
Slides

Venue: Sea AI Lab (SAIL)

[2022.06] Semantic Parsing of Natural Language from Weakly Labeled Data (Ph.D. Defense)
📖 Focuses on the compositional and domain generalization of semantic parsing, answer-driven semantic parsing under weak supervision, and conversational semantic parsing under semi-supervision.
Slides(Chinese) | Thesis(Chinese)

Venue: Beihang University (BUAA) Host: Prof. Maosong Sun

[2022.01-02] Towards Data-Efficient Semantic Parsing (Job Talk)
📖 Built methods to improve semantic parsers' performance and generalization capacity with program data, task data, or even no data, and integrated the research into the real-world product PowerApp.
Slides

Venue: Sea AI Lab (SAIL) Host: Dr. Min Lin
Venue: Microsoft Research Asia (MSRA) Host: Dr. Jian-Guang Lou

[2022.01] How to Find a Research Job in Industry (Seminar)
📖 Discusses the critical steps in seeking a good research job, such as resume preparation, coding exercises, project discussions, and behavioral questions.
Video(Chinese) | Slides(Chinese)

Venue: MLNLP Community Host: Bei Li

[2021.07] On the Future of Semantic Parsing (Seminar)
📖 Discusses the past, present, and future of semantic parsing with other rising stars in the field.
Video(Chinese) | Blog(Chinese)

Venue: AI TIME Speakers: Dr. Pengcheng Yin, Dr. Ziyu Yao, Dr. Bailin Wang

📞 Contact

Please feel free to contact me via email if you are interested in our papers or my experience, or if you have any research questions I may be able to help with.
Beihang University, 2013 - 2017
Microsoft Research Asia, 2017 - 2022
Sea AI Lab, 2022 - 2024
TikTok, 2024 - Present