Stanford, Mila, McGill
- Palo Alto, Montreal, Chengdu
- https://haolun-wu.github.io/
- https://orcid.org/0000-0001-6255-1535
- @Haolun_Wu0203
- in/haolun-wu-23ba08133
Stars
Repository for the survey "What, How, Where, and How Well? A Survey on Test-Time Scaling in Large Language Models"
[TMLR 2025 & ICLR 2025 DeLTa] Official Implementation of Design Editing for Offline Model-based Optimization 🧬 🤖
Generate Nike-Style Product Descriptions with this Scraped Dataset
Code for the NeurIPS'24 paper "Density-based User Representation using Gaussian Process Regression for Multi-interest Personalized Retrieval"
Code for the EMNLP'24 paper "Learning to Extract Structured Entities Using Language Models"
Code for the paper "Aligning LLM Agents by Learning Latent Preference from User Edits".
A curated list of reinforcement learning with human feedback resources (continually updated)
📰 Must-Read Papers on Offline Model-Based Optimization 🔥
📰 Must-read papers on Diffusion Models for Text Generation 🔥
A list of awesome papers and resources of recommender system on large language model (LLM).
Code accompanying the ACM TORS paper "Evaluation Measures of Individual Item Fairness for Recommender Systems: A Critical Study" (accepted in 2023)
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
Query Auto-Completion for Rare Prefixes
Tutorial on how to use the SHAP library to explain the feature importance with Shapley values.
Less or More From Teacher: Exploiting Trilateral Geometry For Knowledge Distillation
Source code for Twitter's Recommendation Algorithm
Schedule and syllabus for Human-Centered Machine Learning.