Stars
A Bittensor subnet for collecting and storing valuable data for other subnets
[ICML 2024] Binoculars: Zero-Shot Detection of LLM-Generated Text
A very simple GRPO implementation for reproducing R1-like LLM thinking.
Minimal reproduction of DeepSeek R1-Zero
NLPCC-2025 Shared-Task 1: LLM-Generated Text Detection
Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends
Fully open reproduction of DeepSeek-R1
Constraint Back-translation Improves Complex Instruction Following of Large Language Models
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
[NeurIPS 2024 D&B] DetectRL: Benchmarking LLM-Generated Text Detection in Real-World Scenarios
An Open Source Implementation of Anthropic's Paper: "Towards Monosemanticity: Decomposing Language Models with Dictionary Learning"
Dive into Deep Learning (《动手学深度学习》): a runnable, discussable deep learning book for Chinese readers. The Chinese and English editions are used for teaching at more than 500 universities in over 70 countries.
Reading list for research topics in multimodal machine learning
(NAACL 2024) Official code repository for Mixset.
👾 A Python API wrapper for Poe.com. With this, you will have free access to GPT-4, Claude, Llama, Gemini, Mistral and more! 🚀
This is the repo for the survey of Bias and Fairness in IR with LLMs.
⏰ Collaboratively track deadlines of conferences recommended by CCF (website, Python CLI, WeChat applet) / If you find it useful, please star this project, thanks~
Aligning pretrained language models with instruction data generated by themselves.
Representation Engineering: A Top-Down Approach to AI Transparency
Differentially-private transformers using HuggingFace and Opacus
[ICML 2024 Spotlight] Differentially Private Synthetic Data via Foundation Model APIs 2: Text
A Unified Framework for Quantifying Privacy Risk in Synthetic Data according to the GDPR
We view Large Language Models as stochastic language layers in a network, where the learnable parameters are the natural language prompts at each layer. We stack two such layers, feeding the output…
🤫 Code and benchmark for our ICLR 2024 spotlight paper: "Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory"
This repository contains related work, benchmarks and datasets for the paper "Large Language Models in Finance (FinLLMs)", currently under review.
Conifer: Improving Complex Constrained Instruction-Following Ability of Large Language Models
LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath